Jul 15 05:15:07.897929 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025
Jul 15 05:15:07.897971 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:15:07.897987 kernel: BIOS-provided physical RAM map:
Jul 15 05:15:07.897998 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 15 05:15:07.898009 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jul 15 05:15:07.898020 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 15 05:15:07.898034 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 15 05:15:07.898046 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 15 05:15:07.898060 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 15 05:15:07.898071 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 15 05:15:07.898083 kernel: NX (Execute Disable) protection: active
Jul 15 05:15:07.898095 kernel: APIC: Static calls initialized
Jul 15 05:15:07.898106 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jul 15 05:15:07.898119 kernel: extended physical RAM map:
Jul 15 05:15:07.898139 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 15 05:15:07.898153 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jul 15 05:15:07.898167 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jul 15 05:15:07.898182 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jul 15 05:15:07.898196 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 15 05:15:07.898211 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 15 05:15:07.898225 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 15 05:15:07.898240 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 15 05:15:07.898254 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 15 05:15:07.898269 kernel: efi: EFI v2.7 by EDK II
Jul 15 05:15:07.898286 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Jul 15 05:15:07.898314 kernel: secureboot: Secure boot disabled
Jul 15 05:15:07.900344 kernel: SMBIOS 2.7 present.
Jul 15 05:15:07.900357 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 15 05:15:07.900368 kernel: DMI: Memory slots populated: 1/1
Jul 15 05:15:07.900380 kernel: Hypervisor detected: KVM
Jul 15 05:15:07.900393 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 05:15:07.900404 kernel: kvm-clock: using sched offset of 5821540734 cycles
Jul 15 05:15:07.900417 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 05:15:07.900429 kernel: tsc: Detected 2499.998 MHz processor
Jul 15 05:15:07.900441 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 05:15:07.900458 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 05:15:07.900470 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jul 15 05:15:07.900483 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 15 05:15:07.900496 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 05:15:07.900510 kernel: Using GB pages for direct mapping
Jul 15 05:15:07.900526 kernel: ACPI: Early table checksum verification disabled
Jul 15 05:15:07.900540 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jul 15 05:15:07.900558 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 15 05:15:07.900572 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 15 05:15:07.900587 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 15 05:15:07.900601 kernel: ACPI: FACS 0x00000000789D0000 000040
Jul 15 05:15:07.900613 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 15 05:15:07.900627 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 15 05:15:07.900642 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 15 05:15:07.900660 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 15 05:15:07.900673 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 15 05:15:07.900686 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 15 05:15:07.900698 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 15 05:15:07.900712 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jul 15 05:15:07.900726 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jul 15 05:15:07.900739 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jul 15 05:15:07.900754 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jul 15 05:15:07.900767 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jul 15 05:15:07.900787 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jul 15 05:15:07.900802 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jul 15 05:15:07.900817 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jul 15 05:15:07.900832 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jul 15 05:15:07.900845 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jul 15 05:15:07.900858 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jul 15 05:15:07.900870 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jul 15 05:15:07.900883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 15 05:15:07.900898 kernel: NUMA: Initialized distance table, cnt=1
Jul 15 05:15:07.900915 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Jul 15 05:15:07.900927 kernel: Zone ranges:
Jul 15 05:15:07.900941 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 05:15:07.900986 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jul 15 05:15:07.901000 kernel: Normal empty
Jul 15 05:15:07.901014 kernel: Device empty
Jul 15 05:15:07.901028 kernel: Movable zone start for each node
Jul 15 05:15:07.901043 kernel: Early memory node ranges
Jul 15 05:15:07.901056 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 15 05:15:07.901074 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jul 15 05:15:07.901088 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jul 15 05:15:07.901102 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jul 15 05:15:07.901116 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 05:15:07.901130 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 15 05:15:07.901145 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 15 05:15:07.901159 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jul 15 05:15:07.901174 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 15 05:15:07.901188 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 05:15:07.901202 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 15 05:15:07.901219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 05:15:07.901233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 05:15:07.901247 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 05:15:07.901261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 05:15:07.901274 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 05:15:07.901289 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 05:15:07.901319 kernel: TSC deadline timer available
Jul 15 05:15:07.901333 kernel: CPU topo: Max. logical packages: 1
Jul 15 05:15:07.901348 kernel: CPU topo: Max. logical dies: 1
Jul 15 05:15:07.901365 kernel: CPU topo: Max. dies per package: 1
Jul 15 05:15:07.901380 kernel: CPU topo: Max. threads per core: 2
Jul 15 05:15:07.901393 kernel: CPU topo: Num. cores per package: 1
Jul 15 05:15:07.901407 kernel: CPU topo: Num. threads per package: 2
Jul 15 05:15:07.901421 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 15 05:15:07.901435 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 05:15:07.901449 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jul 15 05:15:07.901463 kernel: Booting paravirtualized kernel on KVM
Jul 15 05:15:07.901488 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 05:15:07.901505 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 15 05:15:07.901520 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 15 05:15:07.901534 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 15 05:15:07.901548 kernel: pcpu-alloc: [0] 0 1
Jul 15 05:15:07.901562 kernel: kvm-guest: PV spinlocks enabled
Jul 15 05:15:07.901574 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 05:15:07.901589 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:15:07.901601 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 05:15:07.901617 kernel: random: crng init done
Jul 15 05:15:07.901630 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 05:15:07.901643 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 15 05:15:07.901656 kernel: Fallback order for Node 0: 0
Jul 15 05:15:07.901669 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jul 15 05:15:07.901682 kernel: Policy zone: DMA32
Jul 15 05:15:07.901706 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 05:15:07.901722 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 05:15:07.901736 kernel: Kernel/User page tables isolation: enabled
Jul 15 05:15:07.901750 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 15 05:15:07.901763 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 05:15:07.901776 kernel: Dynamic Preempt: voluntary
Jul 15 05:15:07.901793 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 05:15:07.901807 kernel: rcu: RCU event tracing is enabled.
Jul 15 05:15:07.901821 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 05:15:07.901835 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 05:15:07.901849 kernel: Rude variant of Tasks RCU enabled.
Jul 15 05:15:07.901865 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 05:15:07.901879 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 05:15:07.901892 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 05:15:07.901906 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:15:07.901920 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:15:07.901934 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:15:07.901947 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 15 05:15:07.901961 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 05:15:07.901975 kernel: Console: colour dummy device 80x25
Jul 15 05:15:07.901992 kernel: printk: legacy console [tty0] enabled
Jul 15 05:15:07.902005 kernel: printk: legacy console [ttyS0] enabled
Jul 15 05:15:07.902019 kernel: ACPI: Core revision 20240827
Jul 15 05:15:07.902033 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 15 05:15:07.902046 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 05:15:07.902059 kernel: x2apic enabled
Jul 15 05:15:07.902073 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 05:15:07.902086 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jul 15 05:15:07.902100 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jul 15 05:15:07.902116 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 15 05:15:07.902129 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 15 05:15:07.902143 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 05:15:07.902156 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 05:15:07.902169 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 05:15:07.902183 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 15 05:15:07.902197 kernel: RETBleed: Vulnerable
Jul 15 05:15:07.902210 kernel: Speculative Store Bypass: Vulnerable
Jul 15 05:15:07.902223 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 15 05:15:07.902237 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 15 05:15:07.902253 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 15 05:15:07.902266 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 15 05:15:07.902280 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 05:15:07.902293 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 05:15:07.902325 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 05:15:07.902338 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 15 05:15:07.902352 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 15 05:15:07.902365 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 15 05:15:07.902378 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 15 05:15:07.902391 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 15 05:15:07.902405 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 15 05:15:07.902422 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 05:15:07.902436 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 15 05:15:07.902449 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 15 05:15:07.902463 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 15 05:15:07.902476 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 15 05:15:07.902489 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 15 05:15:07.902503 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 15 05:15:07.902517 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 15 05:15:07.902530 kernel: Freeing SMP alternatives memory: 32K
Jul 15 05:15:07.902544 kernel: pid_max: default: 32768 minimum: 301
Jul 15 05:15:07.902557 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 05:15:07.902570 kernel: landlock: Up and running.
Jul 15 05:15:07.902586 kernel: SELinux: Initializing.
Jul 15 05:15:07.902600 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 15 05:15:07.902614 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 15 05:15:07.902627 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 15 05:15:07.902641 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 15 05:15:07.902654 kernel: signal: max sigframe size: 3632
Jul 15 05:15:07.902668 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 05:15:07.902682 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 05:15:07.902696 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 05:15:07.902710 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 15 05:15:07.902726 kernel: smp: Bringing up secondary CPUs ...
Jul 15 05:15:07.902740 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 05:15:07.902754 kernel: .... node #0, CPUs: #1
Jul 15 05:15:07.902768 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 15 05:15:07.902782 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 15 05:15:07.902796 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 05:15:07.902809 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jul 15 05:15:07.902822 kernel: Memory: 1908052K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 125188K reserved, 0K cma-reserved)
Jul 15 05:15:07.902839 kernel: devtmpfs: initialized
Jul 15 05:15:07.902853 kernel: x86/mm: Memory block size: 128MB
Jul 15 05:15:07.902866 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jul 15 05:15:07.902881 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 05:15:07.902894 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 05:15:07.902908 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 05:15:07.902921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 05:15:07.902934 kernel: audit: initializing netlink subsys (disabled)
Jul 15 05:15:07.902948 kernel: audit: type=2000 audit(1752556505.405:1): state=initialized audit_enabled=0 res=1
Jul 15 05:15:07.902964 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 05:15:07.902979 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 05:15:07.902992 kernel: cpuidle: using governor menu
Jul 15 05:15:07.903005 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 05:15:07.903019 kernel: dca service started, version 1.12.1
Jul 15 05:15:07.903033 kernel: PCI: Using configuration type 1 for base access
Jul 15 05:15:07.903046 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 05:15:07.903061 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 05:15:07.903074 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 05:15:07.903091 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 05:15:07.903105 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 05:15:07.903118 kernel: ACPI: Added _OSI(Module Device)
Jul 15 05:15:07.903132 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 05:15:07.903145 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 05:15:07.903159 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 15 05:15:07.903172 kernel: ACPI: Interpreter enabled
Jul 15 05:15:07.903187 kernel: ACPI: PM: (supports S0 S5)
Jul 15 05:15:07.903200 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 05:15:07.903217 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 05:15:07.903231 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 05:15:07.903245 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 15 05:15:07.903259 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 05:15:07.906796 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 05:15:07.906962 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 15 05:15:07.907098 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 15 05:15:07.907125 kernel: acpiphp: Slot [3] registered
Jul 15 05:15:07.907141 kernel: acpiphp: Slot [4] registered
Jul 15 05:15:07.907156 kernel: acpiphp: Slot [5] registered
Jul 15 05:15:07.907171 kernel: acpiphp: Slot [6] registered
Jul 15 05:15:07.907185 kernel: acpiphp: Slot [7] registered
Jul 15 05:15:07.907197 kernel: acpiphp: Slot [8] registered
Jul 15 05:15:07.907210 kernel: acpiphp: Slot [9] registered
Jul 15 05:15:07.907223 kernel: acpiphp: Slot [10] registered
Jul 15 05:15:07.907237 kernel: acpiphp: Slot [11] registered
Jul 15 05:15:07.907254 kernel: acpiphp: Slot [12] registered
Jul 15 05:15:07.907267 kernel: acpiphp: Slot [13] registered
Jul 15 05:15:07.907281 kernel: acpiphp: Slot [14] registered
Jul 15 05:15:07.907293 kernel: acpiphp: Slot [15] registered
Jul 15 05:15:07.907340 kernel: acpiphp: Slot [16] registered
Jul 15 05:15:07.907354 kernel: acpiphp: Slot [17] registered
Jul 15 05:15:07.907368 kernel: acpiphp: Slot [18] registered
Jul 15 05:15:07.907383 kernel: acpiphp: Slot [19] registered
Jul 15 05:15:07.907397 kernel: acpiphp: Slot [20] registered
Jul 15 05:15:07.907410 kernel: acpiphp: Slot [21] registered
Jul 15 05:15:07.907426 kernel: acpiphp: Slot [22] registered
Jul 15 05:15:07.907440 kernel: acpiphp: Slot [23] registered
Jul 15 05:15:07.907454 kernel: acpiphp: Slot [24] registered
Jul 15 05:15:07.907468 kernel: acpiphp: Slot [25] registered
Jul 15 05:15:07.907482 kernel: acpiphp: Slot [26] registered
Jul 15 05:15:07.907497 kernel: acpiphp: Slot [27] registered
Jul 15 05:15:07.907511 kernel: acpiphp: Slot [28] registered
Jul 15 05:15:07.907526 kernel: acpiphp: Slot [29] registered
Jul 15 05:15:07.907541 kernel: acpiphp: Slot [30] registered
Jul 15 05:15:07.907559 kernel: acpiphp: Slot [31] registered
Jul 15 05:15:07.907574 kernel: PCI host bridge to bus 0000:00
Jul 15 05:15:07.907727 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 05:15:07.907851 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 05:15:07.907976 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 05:15:07.908092 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 15 05:15:07.908212 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jul 15 05:15:07.910990 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 05:15:07.911167 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 15 05:15:07.911333 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 15 05:15:07.911473 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jul 15 05:15:07.911604 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 15 05:15:07.911727 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 15 05:15:07.911862 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 15 05:15:07.911983 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 15 05:15:07.912109 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 15 05:15:07.912235 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 15 05:15:07.912372 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 15 05:15:07.912509 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 05:15:07.912637 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jul 15 05:15:07.912764 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 15 05:15:07.912891 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 05:15:07.913031 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jul 15 05:15:07.913174 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jul 15 05:15:07.917425 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jul 15 05:15:07.917623 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jul 15 05:15:07.917648 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 05:15:07.917671 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 05:15:07.917687 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 05:15:07.917702 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 05:15:07.917716 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 15 05:15:07.917732 kernel: iommu: Default domain type: Translated
Jul 15 05:15:07.917747 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 05:15:07.917762 kernel: efivars: Registered efivars operations
Jul 15 05:15:07.917777 kernel: PCI: Using ACPI for IRQ routing
Jul 15 05:15:07.917793 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 05:15:07.917811 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jul 15 05:15:07.917826 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jul 15 05:15:07.917840 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jul 15 05:15:07.918007 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 15 05:15:07.918149 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 15 05:15:07.918289 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 05:15:07.919360 kernel: vgaarb: loaded
Jul 15 05:15:07.919382 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 15 05:15:07.919399 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 15 05:15:07.919420 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 05:15:07.919437 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 05:15:07.919454 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 05:15:07.919471 kernel: pnp: PnP ACPI init
Jul 15 05:15:07.919487 kernel: pnp: PnP ACPI: found 5 devices
Jul 15 05:15:07.919502 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 05:15:07.919517 kernel: NET: Registered PF_INET protocol family
Jul 15 05:15:07.919533 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 05:15:07.919549 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 15 05:15:07.919568 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 05:15:07.919585 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 15 05:15:07.919601 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 15 05:15:07.919617 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 15 05:15:07.919635 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 15 05:15:07.919652 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 15 05:15:07.919670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 05:15:07.919686 kernel: NET: Registered PF_XDP protocol family
Jul 15 05:15:07.919848 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 05:15:07.919986 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 05:15:07.920116 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 05:15:07.920242 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 15 05:15:07.925043 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jul 15 05:15:07.925224 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 15 05:15:07.925248 kernel: PCI: CLS 0 bytes, default 64
Jul 15 05:15:07.925264 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 15 05:15:07.925281 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jul 15 05:15:07.925326 kernel: clocksource: Switched to clocksource tsc
Jul 15 05:15:07.925341 kernel: Initialise system trusted keyrings
Jul 15 05:15:07.925354 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 15 05:15:07.925367 kernel: Key type asymmetric registered
Jul 15 05:15:07.925380 kernel: Asymmetric key parser 'x509' registered
Jul 15 05:15:07.925394 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 05:15:07.925408 kernel: io scheduler mq-deadline registered
Jul 15 05:15:07.925423 kernel: io scheduler kyber registered
Jul 15 05:15:07.925437 kernel: io scheduler bfq registered
Jul 15 05:15:07.925457 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 05:15:07.925472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 05:15:07.925496 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 05:15:07.925510 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 05:15:07.925523 kernel: i8042: Warning: Keylock active
Jul 15 05:15:07.925537 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 05:15:07.925552 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 05:15:07.925709 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 15 05:15:07.925839 kernel: rtc_cmos 00:00: registered as rtc0
Jul 15 05:15:07.925965 kernel: rtc_cmos 00:00: setting system clock to 2025-07-15T05:15:07 UTC (1752556507)
Jul 15 05:15:07.926089 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 15 05:15:07.926110 kernel: intel_pstate: CPU model not supported
Jul 15 05:15:07.926149 kernel: efifb: probing for efifb
Jul 15 05:15:07.926169 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jul 15 05:15:07.926185 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jul 15 05:15:07.926202 kernel: efifb: scrolling: redraw
Jul 15 05:15:07.926222 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 15 05:15:07.926239 kernel: Console: switching to colour frame buffer device 100x37
Jul 15 05:15:07.926259 kernel: fb0: EFI VGA frame buffer device
Jul 15 05:15:07.926279 kernel: pstore: Using crash dump compression: deflate
Jul 15 05:15:07.926319 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 15 05:15:07.926341 kernel: NET: Registered PF_INET6 protocol family
Jul 15 05:15:07.926361 kernel: Segment Routing with IPv6
Jul 15 05:15:07.926382 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 05:15:07.926404 kernel: NET: Registered PF_PACKET protocol family
Jul 15 05:15:07.926425 kernel: Key type dns_resolver registered
Jul 15 05:15:07.926449 kernel: IPI shorthand broadcast: enabled
Jul 15 05:15:07.926470 kernel: sched_clock: Marking stable (2692002983, 144273829)->(2926685057, -90408245)
Jul 15 05:15:07.926491 kernel: registered taskstats version 1
Jul 15 05:15:07.926511 kernel: Loading compiled-in X.509 certificates
Jul 15 05:15:07.926532 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7'
Jul 15 05:15:07.926552 kernel: Demotion targets for Node 0: null
Jul 15 05:15:07.926572 kernel: Key type .fscrypt registered
Jul 15 05:15:07.926593 kernel: Key type fscrypt-provisioning registered
Jul 15 05:15:07.926613 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 05:15:07.926637 kernel: ima: Allocated hash algorithm: sha1
Jul 15 05:15:07.926657 kernel: ima: No architecture policies found
Jul 15 05:15:07.926678 kernel: clk: Disabling unused clocks
Jul 15 05:15:07.926698 kernel: Warning: unable to open an initial console.
Jul 15 05:15:07.926718 kernel: Freeing unused kernel image (initmem) memory: 54608K
Jul 15 05:15:07.926739 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 05:15:07.926760 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 05:15:07.926781 kernel: Run /init as init process
Jul 15 05:15:07.926807 kernel: with arguments:
Jul 15 05:15:07.926828 kernel: /init
Jul 15 05:15:07.926848 kernel: with environment:
Jul 15 05:15:07.926868 kernel: HOME=/
Jul 15 05:15:07.926887 kernel: TERM=linux
Jul 15 05:15:07.926908 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 05:15:07.926934 systemd[1]: Successfully made /usr/ read-only.
Jul 15 05:15:07.926962 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:15:07.926983 systemd[1]: Detected virtualization amazon.
Jul 15 05:15:07.927004 systemd[1]: Detected architecture x86-64.
Jul 15 05:15:07.927024 systemd[1]: Running in initrd.
Jul 15 05:15:07.927044 systemd[1]: No hostname configured, using default hostname.
Jul 15 05:15:07.927066 systemd[1]: Hostname set to .
Jul 15 05:15:07.927091 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 05:15:07.927112 systemd[1]: Queued start job for default target initrd.target.
Jul 15 05:15:07.927135 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:15:07.927157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:15:07.927181 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 05:15:07.927203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:15:07.927226 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 05:15:07.927253 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 05:15:07.927272 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 05:15:07.927293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 05:15:07.927629 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:15:07.927650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:15:07.927669 systemd[1]: Reached target paths.target - Path Units.
Jul 15 05:15:07.927688 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:15:07.927706 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:15:07.927730 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 05:15:07.927748 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:15:07.927768 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:15:07.927787 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 05:15:07.927806 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 05:15:07.927825 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:15:07.927843 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:15:07.927862 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:15:07.927885 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 05:15:07.927903 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 05:15:07.927921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 05:15:07.927939 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 05:15:07.927958 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 05:15:07.927978 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 05:15:07.927996 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 05:15:07.928014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 05:15:07.928032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:15:07.928055 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 05:15:07.928120 systemd-journald[207]: Collecting audit messages is disabled.
Jul 15 05:15:07.928163 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:15:07.928181 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 05:15:07.928200 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 05:15:07.928220 systemd-journald[207]: Journal started
Jul 15 05:15:07.928259 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2006ef0cc0bd562042a54a19e216b8) is 4.8M, max 38.4M, 33.6M free.
Jul 15 05:15:07.931844 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 05:15:07.934680 systemd-modules-load[208]: Inserted module 'overlay'
Jul 15 05:15:07.934884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:15:07.942680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 05:15:07.945470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 05:15:07.950370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:15:07.957484 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 05:15:07.980163 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 05:15:07.986326 kernel: Bridge firewalling registered
Jul 15 05:15:07.987041 systemd-modules-load[208]: Inserted module 'br_netfilter'
Jul 15 05:15:07.988506 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:15:07.993178 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 05:15:07.996704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 05:15:08.003979 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 05:15:08.004758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:15:08.007846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:15:08.012460 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 05:15:08.015470 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 05:15:08.025459 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:15:08.030516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 05:15:08.042048 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:15:08.091160 systemd-resolved[246]: Positive Trust Anchors:
Jul 15 05:15:08.092097 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 05:15:08.092165 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 05:15:08.101060 systemd-resolved[246]: Defaulting to hostname 'linux'.
Jul 15 05:15:08.103182 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 05:15:08.104681 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:15:08.139335 kernel: SCSI subsystem initialized
Jul 15 05:15:08.149327 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 05:15:08.160336 kernel: iscsi: registered transport (tcp)
Jul 15 05:15:08.182342 kernel: iscsi: registered transport (qla4xxx)
Jul 15 05:15:08.182416 kernel: QLogic iSCSI HBA Driver
Jul 15 05:15:08.200849 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 05:15:08.223843 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:15:08.226247 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 05:15:08.271473 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 05:15:08.273461 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 05:15:08.321355 kernel: raid6: avx512x4 gen() 18001 MB/s
Jul 15 05:15:08.339331 kernel: raid6: avx512x2 gen() 17824 MB/s
Jul 15 05:15:08.357356 kernel: raid6: avx512x1 gen() 17705 MB/s
Jul 15 05:15:08.375331 kernel: raid6: avx2x4 gen() 17566 MB/s
Jul 15 05:15:08.393336 kernel: raid6: avx2x2 gen() 17613 MB/s
Jul 15 05:15:08.411595 kernel: raid6: avx2x1 gen() 13669 MB/s
Jul 15 05:15:08.411659 kernel: raid6: using algorithm avx512x4 gen() 18001 MB/s
Jul 15 05:15:08.430609 kernel: raid6: .... xor() 7724 MB/s, rmw enabled
Jul 15 05:15:08.430677 kernel: raid6: using avx512x2 recovery algorithm
Jul 15 05:15:08.451335 kernel: xor: automatically using best checksumming function avx
Jul 15 05:15:08.619338 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 05:15:08.626159 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 05:15:08.628223 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:15:08.654844 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jul 15 05:15:08.661218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:15:08.664447 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 05:15:08.689456 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Jul 15 05:15:08.715921 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 05:15:08.717553 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 05:15:08.775621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:15:08.781788 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 05:15:08.867543 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 15 05:15:08.867810 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 15 05:15:08.872321 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 05:15:08.899484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:15:08.901457 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:15:08.904390 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 15 05:15:08.904709 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:15:08.910327 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 15 05:15:08.911202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:15:08.967132 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Jul 15 05:15:08.967170 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 15 05:15:08.967403 kernel: AES CTR mode by8 optimization enabled
Jul 15 05:15:08.967425 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:79:5d:92:d1:db
Jul 15 05:15:08.967600 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 15 05:15:08.967775 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 05:15:08.967795 kernel: GPT:9289727 != 16777215
Jul 15 05:15:08.967820 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 05:15:08.967840 kernel: GPT:9289727 != 16777215
Jul 15 05:15:08.967869 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 05:15:08.967886 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 05:15:08.940791 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:15:08.953780 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 05:15:08.971469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:15:08.972367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:15:08.974223 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:15:08.976943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:15:09.000372 kernel: nvme nvme0: using unchecked data buffer
Jul 15 05:15:09.012696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:15:09.128963 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 15 05:15:09.131031 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 15 05:15:09.149426 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 15 05:15:09.162059 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 15 05:15:09.162928 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 05:15:09.182889 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 15 05:15:09.183682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 05:15:09.184799 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:15:09.185985 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 05:15:09.187686 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 05:15:09.190493 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 05:15:09.208978 disk-uuid[699]: Primary Header is updated.
Jul 15 05:15:09.208978 disk-uuid[699]: Secondary Entries is updated.
Jul 15 05:15:09.208978 disk-uuid[699]: Secondary Header is updated.
Jul 15 05:15:09.213068 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 05:15:09.219322 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 05:15:10.234354 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 05:15:10.234639 disk-uuid[705]: The operation has completed successfully.
Jul 15 05:15:10.357481 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 05:15:10.357603 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 05:15:10.388400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 05:15:10.402116 sh[967]: Success
Jul 15 05:15:10.432929 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 05:15:10.433010 kernel: device-mapper: uevent: version 1.0.3
Jul 15 05:15:10.433034 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 05:15:10.467341 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 15 05:15:10.580633 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 05:15:10.583387 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 05:15:10.593116 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 05:15:10.623610 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 05:15:10.623673 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (990)
Jul 15 05:15:10.630298 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b
Jul 15 05:15:10.630384 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:15:10.630398 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 05:15:10.725094 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 05:15:10.726130 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 05:15:10.727361 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 05:15:10.728095 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 05:15:10.730411 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 05:15:10.763325 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1023)
Jul 15 05:15:10.767473 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:15:10.767532 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:15:10.767545 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 05:15:10.789350 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:15:10.789760 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 05:15:10.791490 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 05:15:10.822736 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 05:15:10.825190 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 05:15:10.865218 systemd-networkd[1159]: lo: Link UP
Jul 15 05:15:10.865231 systemd-networkd[1159]: lo: Gained carrier
Jul 15 05:15:10.867026 systemd-networkd[1159]: Enumeration completed
Jul 15 05:15:10.867146 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 05:15:10.867886 systemd-networkd[1159]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:15:10.867892 systemd-networkd[1159]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 05:15:10.868272 systemd[1]: Reached target network.target - Network.
Jul 15 05:15:10.870576 systemd-networkd[1159]: eth0: Link UP
Jul 15 05:15:10.870581 systemd-networkd[1159]: eth0: Gained carrier
Jul 15 05:15:10.870595 systemd-networkd[1159]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:15:10.882408 systemd-networkd[1159]: eth0: DHCPv4 address 172.31.21.211/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 15 05:15:11.316060 ignition[1118]: Ignition 2.21.0
Jul 15 05:15:11.316074 ignition[1118]: Stage: fetch-offline
Jul 15 05:15:11.316252 ignition[1118]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:15:11.316259 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 05:15:11.316585 ignition[1118]: Ignition finished successfully
Jul 15 05:15:11.318426 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 05:15:11.319868 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 05:15:11.344061 ignition[1169]: Ignition 2.21.0
Jul 15 05:15:11.344078 ignition[1169]: Stage: fetch
Jul 15 05:15:11.344455 ignition[1169]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:15:11.344468 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 05:15:11.344574 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 05:15:11.353254 ignition[1169]: PUT result: OK
Jul 15 05:15:11.355289 ignition[1169]: parsed url from cmdline: ""
Jul 15 05:15:11.355319 ignition[1169]: no config URL provided
Jul 15 05:15:11.355329 ignition[1169]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 05:15:11.355343 ignition[1169]: no config at "/usr/lib/ignition/user.ign"
Jul 15 05:15:11.355372 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 05:15:11.355927 ignition[1169]: PUT result: OK
Jul 15 05:15:11.355972 ignition[1169]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 15 05:15:11.356490 ignition[1169]: GET result: OK
Jul 15 05:15:11.356585 ignition[1169]: parsing config with SHA512: 231b87e04ba7fc3aa198358f71489d0c93a98dac0c299f11f824ea3a389a13d2809238e5c21c6948ae4610a60d6b5e8a4dcd1bda0dd6acc8915509665f273991
Jul 15 05:15:11.360971 unknown[1169]: fetched base config from "system"
Jul 15 05:15:11.360984 unknown[1169]: fetched base config from "system"
Jul 15 05:15:11.361504 ignition[1169]: fetch: fetch complete
Jul 15 05:15:11.360993 unknown[1169]: fetched user config from "aws"
Jul 15 05:15:11.361512 ignition[1169]: fetch: fetch passed
Jul 15 05:15:11.361568 ignition[1169]: Ignition finished successfully
Jul 15 05:15:11.364775 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 05:15:11.366334 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 05:15:11.394925 ignition[1175]: Ignition 2.21.0
Jul 15 05:15:11.394938 ignition[1175]: Stage: kargs
Jul 15 05:15:11.395231 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:15:11.395239 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 05:15:11.395331 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 05:15:11.396246 ignition[1175]: PUT result: OK
Jul 15 05:15:11.399266 ignition[1175]: kargs: kargs passed
Jul 15 05:15:11.399698 ignition[1175]: Ignition finished successfully
Jul 15 05:15:11.401043 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 05:15:11.402689 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 05:15:11.425897 ignition[1182]: Ignition 2.21.0
Jul 15 05:15:11.425905 ignition[1182]: Stage: disks
Jul 15 05:15:11.426187 ignition[1182]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:15:11.426196 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 05:15:11.426274 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 05:15:11.427163 ignition[1182]: PUT result: OK
Jul 15 05:15:11.429430 ignition[1182]: disks: disks passed
Jul 15 05:15:11.429600 ignition[1182]: Ignition finished successfully
Jul 15 05:15:11.431043 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 05:15:11.431575 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 05:15:11.431899 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 05:15:11.432422 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 05:15:11.432910 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:15:11.433563 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:15:11.434988 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 05:15:11.487691 systemd-fsck[1190]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 05:15:11.490562 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 05:15:11.492208 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 05:15:11.667323 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none. Jul 15 05:15:11.668261 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 05:15:11.669158 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 05:15:11.671433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:15:11.674393 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 05:15:11.675567 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 05:15:11.676207 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 05:15:11.676236 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:15:11.686681 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 05:15:11.688666 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 05:15:11.707331 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1209) Jul 15 05:15:11.710572 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:11.710628 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:11.712504 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:15:11.720909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:15:12.151392 initrd-setup-root[1233]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 05:15:12.166697 initrd-setup-root[1240]: cut: /sysroot/etc/group: No such file or directory Jul 15 05:15:12.170657 initrd-setup-root[1247]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 05:15:12.175023 initrd-setup-root[1254]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 05:15:12.246492 systemd-networkd[1159]: eth0: Gained IPv6LL Jul 15 05:15:12.495562 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 05:15:12.497260 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 05:15:12.498865 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 05:15:12.515267 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
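For scale, the fsck summary above ("15/553520 files, 52789/553472 blocks") shows the ROOT filesystem is still nearly empty at this point; the quick check below works out the ratios (about 0.003% of inodes and roughly 9.5% of blocks in use).

    # Quick check of the fsck summary above: fraction of inodes and blocks in use.
    files_used, files_total = 15, 553520
    blocks_used, blocks_total = 52789, 553472
    print(f"inodes in use: {files_used / files_total:.4%}")    # ~0.0027%
    print(f"blocks in use: {blocks_used / blocks_total:.2%}")  # ~9.54%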
Jul 15 05:15:12.517562 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:12.540909 ignition[1321]: INFO : Ignition 2.21.0 Jul 15 05:15:12.540909 ignition[1321]: INFO : Stage: mount Jul 15 05:15:12.542217 ignition[1321]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:12.542217 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:12.542217 ignition[1321]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:12.544318 ignition[1321]: INFO : PUT result: OK Jul 15 05:15:12.546173 ignition[1321]: INFO : mount: mount passed Jul 15 05:15:12.546643 ignition[1321]: INFO : Ignition finished successfully Jul 15 05:15:12.548121 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 05:15:12.549643 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 05:15:12.551365 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 05:15:12.670096 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:15:12.706332 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1335) Jul 15 05:15:12.711435 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:15:12.711500 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:15:12.711514 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 05:15:12.720827 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:15:12.746188 ignition[1351]: INFO : Ignition 2.21.0 Jul 15 05:15:12.746188 ignition[1351]: INFO : Stage: files Jul 15 05:15:12.747430 ignition[1351]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:12.747430 ignition[1351]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:12.747430 ignition[1351]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:12.747430 ignition[1351]: INFO : PUT result: OK Jul 15 05:15:12.750464 ignition[1351]: DEBUG : files: compiled without relabeling support, skipping Jul 15 05:15:12.752009 ignition[1351]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 05:15:12.754256 ignition[1351]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 05:15:12.758738 ignition[1351]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 05:15:12.759773 ignition[1351]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 05:15:12.759773 ignition[1351]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 05:15:12.759394 unknown[1351]: wrote ssh authorized keys file for user: core Jul 15 05:15:12.763045 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 15 05:15:12.763904 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 15 05:15:12.855076 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 05:15:13.027843 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 15 05:15:13.027843 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/home/core/install.sh" Jul 15 05:15:13.029488 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 05:15:13.029488 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:15:13.029488 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:15:13.029488 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:15:13.029488 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:15:13.029488 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:15:13.029488 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:15:13.033899 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:15:13.033899 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:15:13.033899 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 05:15:13.036341 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 05:15:13.036341 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 05:15:13.036341 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 15 05:15:13.435699 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 05:15:16.787723 ignition[1351]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 05:15:16.787723 ignition[1351]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 05:15:16.790100 ignition[1351]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:15:16.794835 ignition[1351]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:15:16.794835 ignition[1351]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 05:15:16.794835 ignition[1351]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 15 05:15:16.797284 ignition[1351]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 05:15:16.797284 ignition[1351]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:15:16.797284 ignition[1351]: 
INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:15:16.797284 ignition[1351]: INFO : files: files passed Jul 15 05:15:16.797284 ignition[1351]: INFO : Ignition finished successfully Jul 15 05:15:16.797011 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 05:15:16.798284 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 05:15:16.802245 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 05:15:16.811577 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 05:15:16.811686 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 05:15:16.828872 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:16.828872 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:16.830981 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:15:16.832607 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:15:16.834144 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 05:15:16.835363 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 05:15:16.880026 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 05:15:16.880171 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 05:15:16.881507 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 05:15:16.882711 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 05:15:16.883632 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 05:15:16.884802 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 05:15:16.907343 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:15:16.909354 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 05:15:16.941700 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:15:16.942362 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:15:16.943278 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 05:15:16.944009 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 05:15:16.944157 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:15:16.945159 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 05:15:16.946188 systemd[1]: Stopped target basic.target - Basic System. Jul 15 05:15:16.946810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 05:15:16.947440 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:15:16.948108 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 05:15:16.948848 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:15:16.949704 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
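The files stage logged above reduces to ordinary file operations under /sysroot: fetch archives, write small files, drop the sysext symlink under /etc/extensions, and install plus preset-enable prepare-helm.service. The sketch below replays a few of those operations into a scratch directory standing in for /sysroot; it is an illustration of what the log records, not Ignition's implementation. The unit body is a placeholder, and the wants-symlink is a simplification of what preset-enabling ultimately produces.

    # Illustration of the file operations logged by the Ignition "files" stage,
    # replayed into a scratch directory instead of the real /sysroot.
    import pathlib
    import tempfile

    def replay_files_stage(sysroot: pathlib.Path) -> None:
        # Write a small file, as op(5) did for /sysroot/home/core/nginx.yaml.
        nginx = sysroot / "home/core/nginx.yaml"
        nginx.parent.mkdir(parents=True, exist_ok=True)
        nginx.write_text("# placeholder manifest\n")

        # Create the sysext symlink from op(9):
        # /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/...
        link = sysroot / "etc/extensions/kubernetes.raw"
        link.parent.mkdir(parents=True, exist_ok=True)
        link.symlink_to("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw")

        # Install a unit and enable it, standing in for op(b)..op(d) on
        # prepare-helm.service (the unit body here is a placeholder).
        unit = sysroot / "etc/systemd/system/prepare-helm.service"
        unit.parent.mkdir(parents=True, exist_ok=True)
        unit.write_text("[Unit]\nDescription=Unpack helm to /opt/bin\n")
        wants = sysroot / "etc/systemd/system/multi-user.target.wants"
        wants.mkdir(parents=True, exist_ok=True)
        (wants / unit.name).symlink_to(unit)

    if __name__ == "__main__":
        with tempfile.TemporaryDirectory() as scratch:
            replay_files_stage(pathlib.Path(scratch))
            print("replayed into", scratch)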
Jul 15 05:15:16.950294 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:15:16.951067 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 05:15:16.951980 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 05:15:16.952736 systemd[1]: Stopped target swap.target - Swaps. Jul 15 05:15:16.953324 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 05:15:16.953531 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:15:16.954464 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:15:16.955208 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:15:16.955829 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 05:15:16.955946 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:15:16.956501 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 05:15:16.956658 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 05:15:16.957612 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 05:15:16.957770 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:15:16.958318 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 05:15:16.958442 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 05:15:16.960967 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 05:15:16.961706 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 05:15:16.961830 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:15:16.963729 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 05:15:16.964448 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 05:15:16.964950 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:15:16.965820 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 05:15:16.966275 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:15:16.970542 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 05:15:16.972571 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 05:15:16.990445 ignition[1406]: INFO : Ignition 2.21.0 Jul 15 05:15:16.990445 ignition[1406]: INFO : Stage: umount Jul 15 05:15:16.992177 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:15:16.992177 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 05:15:16.992177 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 05:15:16.996941 ignition[1406]: INFO : PUT result: OK Jul 15 05:15:16.999515 ignition[1406]: INFO : umount: umount passed Jul 15 05:15:16.999515 ignition[1406]: INFO : Ignition finished successfully Jul 15 05:15:16.998964 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 05:15:17.000895 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 05:15:17.001101 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 05:15:17.002403 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 05:15:17.002509 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jul 15 05:15:17.002968 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 05:15:17.003030 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 05:15:17.003614 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 05:15:17.003670 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 05:15:17.004739 systemd[1]: Stopped target network.target - Network. Jul 15 05:15:17.005673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 05:15:17.005739 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:15:17.006800 systemd[1]: Stopped target paths.target - Path Units. Jul 15 05:15:17.007333 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 05:15:17.008612 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:15:17.009068 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 05:15:17.009756 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 05:15:17.010406 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 05:15:17.010459 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:15:17.011046 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 05:15:17.011093 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:15:17.011670 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 05:15:17.011737 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 05:15:17.012681 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 05:15:17.012736 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 05:15:17.013608 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 05:15:17.014161 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 05:15:17.016633 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 05:15:17.016755 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 05:15:17.022392 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 05:15:17.022750 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 05:15:17.022839 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 05:15:17.024022 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 05:15:17.024141 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 05:15:17.026040 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 05:15:17.027960 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 05:15:17.028419 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 05:15:17.028479 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:15:17.029016 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 05:15:17.029084 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 05:15:17.030795 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 05:15:17.033521 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 05:15:17.033591 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:15:17.035611 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 15 05:15:17.035673 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:15:17.036398 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 05:15:17.036457 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 05:15:17.037032 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 05:15:17.037085 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:15:17.037873 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:15:17.039968 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 05:15:17.040047 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:15:17.046684 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 05:15:17.046885 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:15:17.048094 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 05:15:17.048171 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 05:15:17.049243 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 05:15:17.049290 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:15:17.050677 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 05:15:17.050742 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:15:17.051418 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 05:15:17.051477 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 05:15:17.052805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 05:15:17.052876 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:15:17.057748 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 05:15:17.058196 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 05:15:17.058273 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:15:17.060858 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 05:15:17.060927 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:15:17.064216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:15:17.064285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:17.066988 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 05:15:17.067066 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 05:15:17.067123 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:15:17.067653 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 05:15:17.070460 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 05:15:17.076935 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 05:15:17.077033 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 05:15:17.078296 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jul 15 05:15:17.079471 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 05:15:17.100091 systemd[1]: Switching root. Jul 15 05:15:17.145096 systemd-journald[207]: Journal stopped Jul 15 05:15:18.993699 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jul 15 05:15:18.993756 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 05:15:18.993771 kernel: SELinux: policy capability open_perms=1 Jul 15 05:15:18.993783 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 05:15:18.993795 kernel: SELinux: policy capability always_check_network=0 Jul 15 05:15:18.993807 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 05:15:18.993818 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 05:15:18.993829 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 05:15:18.993840 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 05:15:18.993860 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 05:15:18.993876 kernel: audit: type=1403 audit(1752556517.591:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 05:15:18.993888 systemd[1]: Successfully loaded SELinux policy in 91.500ms. Jul 15 05:15:18.993913 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.086ms. Jul 15 05:15:18.993926 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:15:18.993939 systemd[1]: Detected virtualization amazon. Jul 15 05:15:18.993951 systemd[1]: Detected architecture x86-64. Jul 15 05:15:18.993963 systemd[1]: Detected first boot. Jul 15 05:15:18.993977 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:15:18.993989 zram_generator::config[1449]: No configuration found. Jul 15 05:15:18.994002 kernel: Guest personality initialized and is inactive Jul 15 05:15:18.994014 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 05:15:18.994029 kernel: Initialized host personality Jul 15 05:15:18.994039 kernel: NET: Registered PF_VSOCK protocol family Jul 15 05:15:18.994050 systemd[1]: Populated /etc with preset unit settings. Jul 15 05:15:18.994063 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 05:15:18.994075 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 05:15:18.994089 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 05:15:18.994102 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 05:15:18.994114 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 05:15:18.994126 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 05:15:18.994138 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 05:15:18.994150 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 05:15:18.994166 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 05:15:18.994179 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 05:15:18.994195 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
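"Initializing machine ID from VM UUID" above means first boot derived /etc/machine-id from the hypervisor-provided UUID rather than generating a random one. The sketch below shows one plausible form of that derivation: reading the DMI product UUID and normalizing it to the 32-hex-digit machine-id format is an assumption for illustration; systemd's actual logic covers more cases (containers, command-line overrides), and the sysfs path is often readable only by root.

    # Hedged sketch: derive a machine-id-style string from the VM's DMI UUID.
    import pathlib
    import re

    def machine_id_from_vm_uuid(
        path: str = "/sys/class/dmi/id/product_uuid",
    ) -> str:
        raw = pathlib.Path(path).read_text().strip().lower()
        digits = re.sub(r"[^0-9a-f]", "", raw)   # drop dashes and whitespace
        if len(digits) != 32:
            raise ValueError(f"unexpected UUID format: {raw!r}")
        return digits                            # machine-id is 32 hex chars, no dashes

    if __name__ == "__main__":
        try:
            print(machine_id_from_vm_uuid())
        except (OSError, ValueError) as err:
            print("could not derive machine id:", err)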
Jul 15 05:15:18.994206 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 05:15:18.994219 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:15:18.994231 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:15:18.994244 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 05:15:18.994257 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 05:15:18.994269 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 05:15:18.994281 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:15:18.994296 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 05:15:18.994319 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:15:18.994331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:15:18.994343 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 05:15:18.994355 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 05:15:18.994367 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 05:15:18.994378 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 05:15:18.994392 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:15:18.994404 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:15:18.994419 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:15:18.994431 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:15:18.994443 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 05:15:18.994456 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 05:15:18.994468 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 05:15:18.994480 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:15:18.994491 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:15:18.994503 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:15:18.994515 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 05:15:18.994529 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 05:15:18.994542 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 05:15:18.994554 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 05:15:18.994566 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:18.994578 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 05:15:18.994589 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 05:15:18.996324 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 05:15:18.996343 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 05:15:18.996356 systemd[1]: Reached target machines.target - Containers. 
Jul 15 05:15:18.996372 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 05:15:18.996385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:15:18.996398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:15:18.996410 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 05:15:18.996422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:15:18.996434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:15:18.996445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:15:18.996458 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 05:15:18.996472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:15:18.996485 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 05:15:18.996498 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 05:15:18.996510 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 05:15:18.996522 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 05:15:18.996534 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 05:15:18.996547 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:15:18.996560 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:15:18.996572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:15:18.996586 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:15:18.996599 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 05:15:18.996611 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 05:15:18.996626 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:15:18.996642 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 05:15:18.996654 systemd[1]: Stopped verity-setup.service. Jul 15 05:15:18.996667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:18.996680 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 05:15:18.996691 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 05:15:18.996704 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 05:15:18.996719 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 05:15:18.996731 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 05:15:18.996743 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 05:15:18.996756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:15:18.996768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 15 05:15:18.996780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:15:18.996792 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 05:15:18.997994 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 05:15:18.998043 systemd-journald[1528]: Collecting audit messages is disabled. Jul 15 05:15:18.998072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:15:18.998085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:15:18.998097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:15:18.998111 systemd-journald[1528]: Journal started Jul 15 05:15:18.998135 systemd-journald[1528]: Runtime Journal (/run/log/journal/ec2006ef0cc0bd562042a54a19e216b8) is 4.8M, max 38.4M, 33.6M free. Jul 15 05:15:18.736869 systemd[1]: Queued start job for default target multi-user.target. Jul 15 05:15:18.749704 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 15 05:15:18.750135 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 05:15:18.999806 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:15:19.001325 kernel: loop: module loaded Jul 15 05:15:19.004473 kernel: fuse: init (API version 7.41) Jul 15 05:15:19.002696 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:15:19.002890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:15:19.003643 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 05:15:19.003790 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 05:15:19.005434 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:15:19.006034 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 05:15:19.014960 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:15:19.020406 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 05:15:19.023926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 05:15:19.024375 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 05:15:19.024415 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:15:19.025713 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 05:15:19.031566 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 05:15:19.032602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:15:19.038456 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 05:15:19.041159 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 05:15:19.042397 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:15:19.047783 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 05:15:19.049399 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 15 05:15:19.055883 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:15:19.061469 systemd-journald[1528]: Time spent on flushing to /var/log/journal/ec2006ef0cc0bd562042a54a19e216b8 is 54.274ms for 1006 entries. Jul 15 05:15:19.061469 systemd-journald[1528]: System Journal (/var/log/journal/ec2006ef0cc0bd562042a54a19e216b8) is 8M, max 195.6M, 187.6M free. Jul 15 05:15:19.134017 systemd-journald[1528]: Received client request to flush runtime journal. Jul 15 05:15:19.059667 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 05:15:19.063350 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 05:15:19.064071 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 05:15:19.064641 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 05:15:19.076569 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 05:15:19.089237 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 05:15:19.114127 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 05:15:19.114748 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 05:15:19.120114 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 05:15:19.137492 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 05:15:19.148223 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:15:19.170332 kernel: loop0: detected capacity change from 0 to 114000 Jul 15 05:15:19.172404 kernel: ACPI: bus type drm_connector registered Jul 15 05:15:19.172674 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:15:19.173634 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:15:19.175663 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 05:15:19.185552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:15:19.193745 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 05:15:19.195556 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:15:19.223607 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Jul 15 05:15:19.224021 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Jul 15 05:15:19.239279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:15:19.296485 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 05:15:19.314692 kernel: loop1: detected capacity change from 0 to 72384 Jul 15 05:15:19.373332 kernel: loop2: detected capacity change from 0 to 146488 Jul 15 05:15:19.512334 kernel: loop3: detected capacity change from 0 to 229808 Jul 15 05:15:19.567347 kernel: loop4: detected capacity change from 0 to 114000 Jul 15 05:15:19.590062 kernel: loop5: detected capacity change from 0 to 72384 Jul 15 05:15:19.609328 kernel: loop6: detected capacity change from 0 to 146488 Jul 15 05:15:19.634344 kernel: loop7: detected capacity change from 0 to 229808 Jul 15 05:15:19.677693 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 15 05:15:19.678247 (sd-merge)[1608]: Merged extensions into '/usr'. 
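The (sd-merge) lines above are systemd-sysext picking up the extension images linked under /etc/extensions (and shipped under /opt) and overlaying them onto /usr. The sketch below only composes the kind of read-only overlayfs mount such a merge amounts to; the /run/systemd/sysext/... layer paths and mount options are assumptions for illustration, not what systemd-sysext literally executes.

    # Conceptual sketch: "merging extensions into /usr" amounts to stacking the
    # extension trees over the base /usr via an overlay mount (read-only).
    def overlay_mount_command(base: str, extension_dirs: list[str]) -> str:
        # Highest-priority layer first in lowerdir, base /usr last.
        lower = ":".join(list(extension_dirs) + [base])
        return f"mount -t overlay overlay -o lowerdir={lower},ro {base}"

    if __name__ == "__main__":
        exts = [
            "/run/systemd/sysext/kubernetes/usr",      # hypothetical staging paths
            "/run/systemd/sysext/docker-flatcar/usr",
        ]
        # Prints the command for inspection; nothing is mounted here.
        print(overlay_mount_command("/usr", exts))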
Jul 15 05:15:19.685009 systemd[1]: Reload requested from client PID 1578 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 05:15:19.685154 systemd[1]: Reloading... Jul 15 05:15:19.760437 zram_generator::config[1630]: No configuration found. Jul 15 05:15:19.926978 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:20.074742 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 05:15:20.075054 systemd[1]: Reloading finished in 389 ms. Jul 15 05:15:20.091323 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:15:20.092081 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 05:15:20.102188 systemd[1]: Starting ensure-sysext.service... Jul 15 05:15:20.104501 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:15:20.110654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:15:20.146358 systemd[1]: Reload requested from client PID 1686 ('systemctl') (unit ensure-sysext.service)... Jul 15 05:15:20.146520 systemd[1]: Reloading... Jul 15 05:15:20.162953 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 05:15:20.163004 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 05:15:20.164403 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 05:15:20.164835 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 05:15:20.168182 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 05:15:20.168646 systemd-tmpfiles[1687]: ACLs are not supported, ignoring. Jul 15 05:15:20.168727 systemd-tmpfiles[1687]: ACLs are not supported, ignoring. Jul 15 05:15:20.179266 systemd-udevd[1688]: Using default interface naming scheme 'v255'. Jul 15 05:15:20.183960 systemd-tmpfiles[1687]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:15:20.183975 systemd-tmpfiles[1687]: Skipping /boot Jul 15 05:15:20.208901 systemd-tmpfiles[1687]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:15:20.210019 systemd-tmpfiles[1687]: Skipping /boot Jul 15 05:15:20.279347 zram_generator::config[1716]: No configuration found. Jul 15 05:15:20.546972 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:20.649495 (udev-worker)[1739]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:15:20.748659 systemd[1]: Reloading finished in 601 ms. Jul 15 05:15:20.756776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:15:20.759120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 15 05:15:20.769802 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 15 05:15:20.770148 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:15:20.810753 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 05:15:20.818328 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 15 05:15:20.824344 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:15:20.828419 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 05:15:20.833810 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 05:15:20.843060 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:15:20.859209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:15:20.870246 kernel: ACPI: button: Power Button [PWRF] Jul 15 05:15:20.863783 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:15:20.876225 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:20.877011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:15:20.886441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:15:20.893108 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:15:20.898373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:15:20.899184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:15:20.899385 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:15:20.899536 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:20.911796 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:15:20.918536 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:20.918848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:15:20.919095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:15:20.919233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:15:20.920449 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:20.930330 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 15 05:15:20.936598 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 15 05:15:20.937090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:15:20.939756 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:15:20.940572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:15:20.941550 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:15:20.941917 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:15:20.942837 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:15:20.957387 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 05:15:20.960692 systemd[1]: Finished ensure-sysext.service. Jul 15 05:15:20.978824 ldconfig[1569]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 05:15:20.982344 kernel: ACPI: button: Sleep Button [SLPF] Jul 15 05:15:20.985222 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 05:15:20.999588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:15:21.001038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:15:21.002755 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:15:21.018124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:15:21.018929 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:15:21.025409 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:15:21.026371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:15:21.029757 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:15:21.039928 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:15:21.042532 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:15:21.042770 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:15:21.051839 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:15:21.108872 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:15:21.114258 augenrules[1915]: No rules Jul 15 05:15:21.116755 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:15:21.117274 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:15:21.121042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 05:15:21.122034 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:15:21.250244 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:15:21.283619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Jul 15 05:15:21.291009 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 05:15:21.307440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:21.329498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:15:21.329794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:21.337777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:15:21.359780 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:15:21.487525 systemd-networkd[1814]: lo: Link UP Jul 15 05:15:21.487540 systemd-networkd[1814]: lo: Gained carrier Jul 15 05:15:21.489592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:15:21.491977 systemd-networkd[1814]: Enumeration completed Jul 15 05:15:21.492447 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:15:21.492633 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:21.492734 systemd-networkd[1814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:15:21.495957 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:15:21.496562 systemd-networkd[1814]: eth0: Link UP Jul 15 05:15:21.496722 systemd-networkd[1814]: eth0: Gained carrier Jul 15 05:15:21.496749 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:15:21.500456 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 05:15:21.507386 systemd-networkd[1814]: eth0: DHCPv4 address 172.31.21.211/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 05:15:21.518443 systemd-resolved[1820]: Positive Trust Anchors: Jul 15 05:15:21.518460 systemd-resolved[1820]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:15:21.518524 systemd-resolved[1820]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:15:21.525855 systemd-resolved[1820]: Defaulting to hostname 'linux'. Jul 15 05:15:21.529109 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:15:21.530030 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 05:15:21.530669 systemd[1]: Reached target network.target - Network. Jul 15 05:15:21.531129 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:15:21.531606 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:15:21.532122 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 15 05:15:21.532595 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:15:21.532965 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:15:21.533534 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:15:21.533979 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:15:21.534366 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:15:21.534705 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:15:21.534747 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:15:21.535101 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:15:21.537422 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:15:21.539076 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:15:21.541590 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 05:15:21.542117 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:15:21.542515 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:15:21.549194 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:15:21.550147 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 05:15:21.551316 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 05:15:21.552551 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:15:21.552944 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:15:21.553439 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:15:21.553476 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:15:21.554472 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 05:15:21.558452 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 05:15:21.561184 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:15:21.564588 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:15:21.568480 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:15:21.572157 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:15:21.573264 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:15:21.576312 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 05:15:21.581572 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 05:15:21.589871 systemd[1]: Started ntpd.service - Network Time Service. Jul 15 05:15:21.596951 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:15:21.604997 jq[1974]: false Jul 15 05:15:21.608240 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 15 05:15:21.615086 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 15 05:15:21.618817 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 05:15:21.624776 google_oslogin_nss_cache[1976]: oslogin_cache_refresh[1976]: Refreshing passwd entry cache Jul 15 05:15:21.625128 oslogin_cache_refresh[1976]: Refreshing passwd entry cache Jul 15 05:15:21.630139 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:15:21.633467 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:15:21.634195 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 05:15:21.643327 google_oslogin_nss_cache[1976]: oslogin_cache_refresh[1976]: Failure getting users, quitting Jul 15 05:15:21.643327 google_oslogin_nss_cache[1976]: oslogin_cache_refresh[1976]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:15:21.643226 oslogin_cache_refresh[1976]: Failure getting users, quitting Jul 15 05:15:21.643249 oslogin_cache_refresh[1976]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:15:21.644102 google_oslogin_nss_cache[1976]: oslogin_cache_refresh[1976]: Refreshing group entry cache Jul 15 05:15:21.643560 oslogin_cache_refresh[1976]: Refreshing group entry cache Jul 15 05:15:21.645594 google_oslogin_nss_cache[1976]: oslogin_cache_refresh[1976]: Failure getting groups, quitting Jul 15 05:15:21.645661 oslogin_cache_refresh[1976]: Failure getting groups, quitting Jul 15 05:15:21.645731 google_oslogin_nss_cache[1976]: oslogin_cache_refresh[1976]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:15:21.645790 oslogin_cache_refresh[1976]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:15:21.656825 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:15:21.659885 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 05:15:21.664891 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:15:21.666889 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 05:15:21.667380 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:15:21.667757 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:15:21.669384 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:15:21.676280 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 05:15:21.677403 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 15 05:15:21.714154 tar[1994]: linux-amd64/LICENSE Jul 15 05:15:21.714154 tar[1994]: linux-amd64/helm Jul 15 05:15:21.752919 jq[1988]: true Jul 15 05:15:21.761355 ntpd[1978]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:16 UTC 2025 (1): Starting Jul 15 05:15:21.767685 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:16 UTC 2025 (1): Starting Jul 15 05:15:21.767685 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 05:15:21.767685 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: ---------------------------------------------------- Jul 15 05:15:21.767685 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: ntp-4 is maintained by Network Time Foundation, Jul 15 05:15:21.767685 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 05:15:21.767685 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: corporation. Support and training for ntp-4 are Jul 15 05:15:21.767685 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: available at https://www.nwtime.org/support Jul 15 05:15:21.767685 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: ---------------------------------------------------- Jul 15 05:15:21.761406 ntpd[1978]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 05:15:21.761418 ntpd[1978]: ---------------------------------------------------- Jul 15 05:15:21.761430 ntpd[1978]: ntp-4 is maintained by Network Time Foundation, Jul 15 05:15:21.761442 ntpd[1978]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 05:15:21.761452 ntpd[1978]: corporation. Support and training for ntp-4 are Jul 15 05:15:21.761463 ntpd[1978]: available at https://www.nwtime.org/support Jul 15 05:15:21.761474 ntpd[1978]: ---------------------------------------------------- Jul 15 05:15:21.771595 extend-filesystems[1975]: Found /dev/nvme0n1p6 Jul 15 05:15:21.773981 ntpd[1978]: proto: precision = 0.091 usec (-23) Jul 15 05:15:21.792601 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: proto: precision = 0.091 usec (-23) Jul 15 05:15:21.792601 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: basedate set to 2025-07-03 Jul 15 05:15:21.792601 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: gps base set to 2025-07-06 (week 2374) Jul 15 05:15:21.789563 ntpd[1978]: basedate set to 2025-07-03 Jul 15 05:15:21.792760 jq[2015]: true Jul 15 05:15:21.789585 ntpd[1978]: gps base set to 2025-07-06 (week 2374) Jul 15 05:15:21.799195 extend-filesystems[1975]: Found /dev/nvme0n1p9 Jul 15 05:15:21.804991 ntpd[1978]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 05:15:21.806734 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 05:15:21.806734 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 05:15:21.806734 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 05:15:21.806734 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: Listen normally on 3 eth0 172.31.21.211:123 Jul 15 05:15:21.805051 ntpd[1978]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 05:15:21.805243 ntpd[1978]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 05:15:21.805280 ntpd[1978]: Listen normally on 3 eth0 172.31.21.211:123 Jul 15 05:15:21.807077 ntpd[1978]: Listen normally on 4 lo [::1]:123 Jul 15 05:15:21.808660 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: Listen normally on 4 lo [::1]:123 Jul 15 05:15:21.808660 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: bind(21) AF_INET6 fe80::479:5dff:fe92:d1db%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:21.808660 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: unable to create 
socket on eth0 (5) for fe80::479:5dff:fe92:d1db%2#123 Jul 15 05:15:21.808660 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: failed to init interface for address fe80::479:5dff:fe92:d1db%2 Jul 15 05:15:21.808660 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: Listening on routing socket on fd #21 for interface updates Jul 15 05:15:21.807151 ntpd[1978]: bind(21) AF_INET6 fe80::479:5dff:fe92:d1db%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:21.807173 ntpd[1978]: unable to create socket on eth0 (5) for fe80::479:5dff:fe92:d1db%2#123 Jul 15 05:15:21.807188 ntpd[1978]: failed to init interface for address fe80::479:5dff:fe92:d1db%2 Jul 15 05:15:21.807225 ntpd[1978]: Listening on routing socket on fd #21 for interface updates Jul 15 05:15:21.809494 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 05:15:21.809806 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 05:15:21.818605 dbus-daemon[1972]: [system] SELinux support is enabled Jul 15 05:15:21.815507 (ntainerd)[2010]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:15:21.825989 extend-filesystems[1975]: Checking size of /dev/nvme0n1p9 Jul 15 05:15:21.818801 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 05:15:21.830016 ntpd[1978]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:21.838125 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:21.838125 ntpd[1978]: 15 Jul 05:15:21 ntpd[1978]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:21.825754 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:15:21.830059 ntpd[1978]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 05:15:21.825801 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 05:15:21.838399 dbus-daemon[1972]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1814 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 05:15:21.836621 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:15:21.836651 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 05:15:21.854052 update_engine[1984]: I20250715 05:15:21.853089 1984 main.cc:92] Flatcar Update Engine starting Jul 15 05:15:21.858292 dbus-daemon[1972]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 05:15:21.861421 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 15 05:15:21.876057 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 15 05:15:21.894627 extend-filesystems[1975]: Resized partition /dev/nvme0n1p9 Jul 15 05:15:21.894971 systemd[1]: Started update-engine.service - Update Engine. 
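
The extend-filesystems unit above has just grown the /dev/nvme0n1p9 partition; the resize2fs pass that expands the ext4 root filesystem into the new space (553472 to 1489915 blocks) appears a little further down. A minimal sketch of the equivalent manual step, assuming root privileges and the ext4 device named in the log; resize2fs grows a mounted ext4 filesystem online when no size argument is given:

# Sketch: manually grow the ext4 filesystem on the root partition, mirroring
# what the extend-filesystems service does below. Requires root.
import subprocess

DEVICE = "/dev/nvme0n1p9"  # root partition named in the log

def run(cmd: list[str]) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(run(["findmnt", "-o", "TARGET,FSTYPE,SIZE", DEVICE]))  # size before
run(["resize2fs", DEVICE])                                   # online grow to partition size
print(run(["findmnt", "-o", "TARGET,FSTYPE,SIZE", DEVICE]))  # size after
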
Jul 15 05:15:21.899535 update_engine[1984]: I20250715 05:15:21.896492 1984 update_check_scheduler.cc:74] Next update check in 9m18s Jul 15 05:15:21.895499 systemd-logind[1983]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 05:15:21.895526 systemd-logind[1983]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 15 05:15:21.895551 systemd-logind[1983]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 05:15:21.901290 systemd-logind[1983]: New seat seat0. Jul 15 05:15:21.903254 extend-filesystems[2041]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 05:15:21.907174 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:15:21.912455 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 05:15:21.920332 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 15 05:15:22.024349 coreos-metadata[1971]: Jul 15 05:15:22.019 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 05:15:22.024349 coreos-metadata[1971]: Jul 15 05:15:22.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 15 05:15:22.028909 coreos-metadata[1971]: Jul 15 05:15:22.028 INFO Fetch successful Jul 15 05:15:22.028909 coreos-metadata[1971]: Jul 15 05:15:22.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 15 05:15:22.068026 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.030 INFO Fetch successful Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.031 INFO Fetch successful Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.032 INFO Fetch successful Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.032 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.033 INFO Fetch failed with 404: resource not found Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.036 INFO Fetch successful Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.036 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.038 INFO Fetch successful Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.038 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.040 INFO Fetch successful Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.042 INFO Fetch successful Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.042 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 15 05:15:22.068077 coreos-metadata[1971]: Jul 15 05:15:22.046 INFO Fetch successful Jul 15 05:15:22.095247 
extend-filesystems[2041]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 15 05:15:22.095247 extend-filesystems[2041]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 05:15:22.095247 extend-filesystems[2041]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 15 05:15:22.093959 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:15:22.116850 bash[2050]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:15:22.116956 extend-filesystems[1975]: Resized filesystem in /dev/nvme0n1p9 Jul 15 05:15:22.094257 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 05:15:22.098651 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:15:22.113678 systemd[1]: Starting sshkeys.service... Jul 15 05:15:22.173502 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 05:15:22.176773 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:15:22.201431 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 15 05:15:22.207105 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 05:15:22.474682 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:15:22.489191 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 05:15:22.493056 dbus-daemon[1972]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 05:15:22.496580 dbus-daemon[1972]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2031 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 05:15:22.505673 systemd[1]: Starting polkit.service - Authorization Manager... Jul 15 05:15:22.550366 coreos-metadata[2076]: Jul 15 05:15:22.550 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 05:15:22.554817 coreos-metadata[2076]: Jul 15 05:15:22.552 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 15 05:15:22.555230 coreos-metadata[2076]: Jul 15 05:15:22.555 INFO Fetch successful Jul 15 05:15:22.555803 coreos-metadata[2076]: Jul 15 05:15:22.555 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 15 05:15:22.561481 coreos-metadata[2076]: Jul 15 05:15:22.560 INFO Fetch successful Jul 15 05:15:22.565730 unknown[2076]: wrote ssh authorized keys file for user: core Jul 15 05:15:22.612436 locksmithd[2042]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:15:22.643858 update-ssh-keys[2150]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:15:22.644686 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 15 05:15:22.658026 systemd[1]: Finished sshkeys.service. 
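
The coreos-metadata entries above follow the EC2 instance-metadata flow: PUT a session token at http://169.254.169.254/latest/api/token, then GET instance-id, local-ipv4, hostname and the SSH public keys under the 2021-01-03 API version. A minimal sketch of the same token-then-fetch sequence (IMDSv2) using only the Python standard library; the token TTL and the example paths are illustrative, the endpoints are the ones shown in the log:

# Sketch of the IMDSv2 flow coreos-metadata performs above:
# fetch a session token, then read metadata paths with that token.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",  # API version used in the log above
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    tok = imds_token()
    for path in ("meta-data/instance-id", "meta-data/local-ipv4",
                 "meta-data/public-keys/0/openssh-key"):
        print(path, "->", imds_get(path, tok))
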
Jul 15 05:15:22.681591 containerd[2010]: time="2025-07-15T05:15:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:15:22.681591 containerd[2010]: time="2025-07-15T05:15:22.678995791Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:15:22.760392 containerd[2010]: time="2025-07-15T05:15:22.759623688Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.074µs" Jul 15 05:15:22.760392 containerd[2010]: time="2025-07-15T05:15:22.759674002Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:15:22.760392 containerd[2010]: time="2025-07-15T05:15:22.759698584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:15:22.760392 containerd[2010]: time="2025-07-15T05:15:22.759878597Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:15:22.760392 containerd[2010]: time="2025-07-15T05:15:22.759896962Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:15:22.760392 containerd[2010]: time="2025-07-15T05:15:22.759929218Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:15:22.760392 containerd[2010]: time="2025-07-15T05:15:22.759993201Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:15:22.760392 containerd[2010]: time="2025-07-15T05:15:22.760007346Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:15:22.762557 ntpd[1978]: bind(24) AF_INET6 fe80::479:5dff:fe92:d1db%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:22.763059 containerd[2010]: time="2025-07-15T05:15:22.760294571Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:15:22.763059 containerd[2010]: time="2025-07-15T05:15:22.762686531Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:15:22.763059 containerd[2010]: time="2025-07-15T05:15:22.762719669Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:15:22.763059 containerd[2010]: time="2025-07-15T05:15:22.762732709Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 05:15:22.763059 containerd[2010]: time="2025-07-15T05:15:22.762884499Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:15:22.763253 ntpd[1978]: 15 Jul 05:15:22 ntpd[1978]: bind(24) AF_INET6 fe80::479:5dff:fe92:d1db%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 05:15:22.763253 ntpd[1978]: 15 Jul 05:15:22 ntpd[1978]: unable to create socket on eth0 (6) for fe80::479:5dff:fe92:d1db%2#123 Jul 15 
05:15:22.763253 ntpd[1978]: 15 Jul 05:15:22 ntpd[1978]: failed to init interface for address fe80::479:5dff:fe92:d1db%2 Jul 15 05:15:22.762619 ntpd[1978]: unable to create socket on eth0 (6) for fe80::479:5dff:fe92:d1db%2#123 Jul 15 05:15:22.763424 containerd[2010]: time="2025-07-15T05:15:22.763134821Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:15:22.763424 containerd[2010]: time="2025-07-15T05:15:22.763174228Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:15:22.763424 containerd[2010]: time="2025-07-15T05:15:22.763191498Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:15:22.763424 containerd[2010]: time="2025-07-15T05:15:22.763247072Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 05:15:22.762635 ntpd[1978]: failed to init interface for address fe80::479:5dff:fe92:d1db%2 Jul 15 05:15:22.764720 containerd[2010]: time="2025-07-15T05:15:22.764691625Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:15:22.764971 containerd[2010]: time="2025-07-15T05:15:22.764792442Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:15:22.774060 containerd[2010]: time="2025-07-15T05:15:22.773994800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:15:22.774168 containerd[2010]: time="2025-07-15T05:15:22.774087824Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:15:22.774209 containerd[2010]: time="2025-07-15T05:15:22.774193319Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:15:22.774264 containerd[2010]: time="2025-07-15T05:15:22.774214178Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:15:22.774264 containerd[2010]: time="2025-07-15T05:15:22.774232157Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:15:22.774264 containerd[2010]: time="2025-07-15T05:15:22.774247567Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:15:22.774378 containerd[2010]: time="2025-07-15T05:15:22.774264079Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:15:22.774378 containerd[2010]: time="2025-07-15T05:15:22.774281693Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:15:22.774378 containerd[2010]: time="2025-07-15T05:15:22.774299207Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 05:15:22.774378 containerd[2010]: time="2025-07-15T05:15:22.774332012Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:15:22.774378 containerd[2010]: time="2025-07-15T05:15:22.774348101Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:15:22.774378 containerd[2010]: time="2025-07-15T05:15:22.774366497Z" 
level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 05:15:22.774563 containerd[2010]: time="2025-07-15T05:15:22.774519418Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:15:22.774563 containerd[2010]: time="2025-07-15T05:15:22.774546526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:15:22.774636 containerd[2010]: time="2025-07-15T05:15:22.774568696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 05:15:22.774636 containerd[2010]: time="2025-07-15T05:15:22.774586301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 05:15:22.774636 containerd[2010]: time="2025-07-15T05:15:22.774601917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:15:22.774636 containerd[2010]: time="2025-07-15T05:15:22.774617501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:15:22.774785 containerd[2010]: time="2025-07-15T05:15:22.774635682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:15:22.774785 containerd[2010]: time="2025-07-15T05:15:22.774651478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:15:22.774785 containerd[2010]: time="2025-07-15T05:15:22.774669117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:15:22.774785 containerd[2010]: time="2025-07-15T05:15:22.774686646Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:15:22.774785 containerd[2010]: time="2025-07-15T05:15:22.774702560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:15:22.774785 containerd[2010]: time="2025-07-15T05:15:22.774780683Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:15:22.775037 containerd[2010]: time="2025-07-15T05:15:22.774799421Z" level=info msg="Start snapshots syncer" Jul 15 05:15:22.775037 containerd[2010]: time="2025-07-15T05:15:22.774823779Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:15:22.776963 containerd[2010]: time="2025-07-15T05:15:22.775186105Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 05:15:22.776963 containerd[2010]: time="2025-07-15T05:15:22.775264401Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780162558Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780387923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780420381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780436704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780452275Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780473972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780498489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780515696Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780552878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 05:15:22.781374 containerd[2010]: 
time="2025-07-15T05:15:22.780568122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780584886Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780631825Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780651325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:15:22.781374 containerd[2010]: time="2025-07-15T05:15:22.780665462Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:15:22.781929 containerd[2010]: time="2025-07-15T05:15:22.780679699Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:15:22.781929 containerd[2010]: time="2025-07-15T05:15:22.780691365Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 05:15:22.781929 containerd[2010]: time="2025-07-15T05:15:22.780703949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:15:22.781929 containerd[2010]: time="2025-07-15T05:15:22.780717474Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:15:22.781929 containerd[2010]: time="2025-07-15T05:15:22.780738603Z" level=info msg="runtime interface created" Jul 15 05:15:22.781929 containerd[2010]: time="2025-07-15T05:15:22.780746784Z" level=info msg="created NRI interface" Jul 15 05:15:22.781929 containerd[2010]: time="2025-07-15T05:15:22.780757760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:15:22.784398 containerd[2010]: time="2025-07-15T05:15:22.783016137Z" level=info msg="Connect containerd service" Jul 15 05:15:22.789218 containerd[2010]: time="2025-07-15T05:15:22.784929521Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:15:22.801605 containerd[2010]: time="2025-07-15T05:15:22.801560984Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:15:22.856331 sshd_keygen[2021]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:15:22.895376 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 05:15:22.899622 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 05:15:22.905402 systemd[1]: Started sshd@0-172.31.21.211:22-139.178.89.65:41068.service - OpenSSH per-connection server daemon (139.178.89.65:41068). Jul 15 05:15:22.927993 polkitd[2139]: Started polkitd version 126 Jul 15 05:15:22.950155 polkitd[2139]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 05:15:22.950716 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:15:22.951229 systemd[1]: Finished issuegen.service - Generate /run/issue. 
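
The containerd entries above show the CRI plugin starting with SystemdCgroup:true for the runc runtime, after warning that the config at /usr/share/containerd/config.toml was migrated from version 2. A minimal sketch for reading that file and reporting the runc options, assuming Python 3.11+ for the stdlib tomllib and the version-2 table layout; keys the distro config leaves implicit fall back to containerd defaults, so some fields may print as None:

# Sketch: read containerd's on-disk TOML config and report the runc runtime
# options the CRI plugin logged above (runtime type, SystemdCgroup).
# Assumes Python 3.11+ (tomllib) and a version-2 style config layout.
import tomllib

CONFIG = "/usr/share/containerd/config.toml"  # path from the migration warning above

with open(CONFIG, "rb") as f:
    cfg = tomllib.load(f)

cri = cfg.get("plugins", {}).get("io.containerd.grpc.v1.cri", {})  # version-2 plugin key
runc = (cri.get("containerd", {})
           .get("runtimes", {})
           .get("runc", {}))
print("runtime_type:", runc.get("runtime_type"))
print("SystemdCgroup:", runc.get("options", {}).get("SystemdCgroup"))
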
Jul 15 05:15:22.953111 polkitd[2139]: Loading rules from directory /run/polkit-1/rules.d Jul 15 05:15:22.953597 polkitd[2139]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 05:15:22.954899 polkitd[2139]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 05:15:22.955035 polkitd[2139]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 05:15:22.955422 polkitd[2139]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 05:15:22.959400 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:15:22.968504 polkitd[2139]: Finished loading, compiling and executing 2 rules Jul 15 05:15:22.968908 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 05:15:22.977553 dbus-daemon[1972]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 05:15:22.978545 polkitd[2139]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 05:15:22.996741 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:15:23.004759 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:15:23.009774 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:15:23.010718 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 05:15:23.049563 systemd-hostnamed[2031]: Hostname set to (transient) Jul 15 05:15:23.050081 systemd-resolved[1820]: System hostname changed to 'ip-172-31-21-211'. Jul 15 05:15:23.062440 systemd-networkd[1814]: eth0: Gained IPv6LL Jul 15 05:15:23.074573 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:15:23.075838 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:15:23.078756 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 15 05:15:23.082093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:23.088639 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 05:15:23.181903 tar[1994]: linux-amd64/README.md Jul 15 05:15:23.200380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 05:15:23.210311 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 05:15:23.240446 amazon-ssm-agent[2212]: Initializing new seelog logger Jul 15 05:15:23.240752 amazon-ssm-agent[2212]: New Seelog Logger Creation Complete Jul 15 05:15:23.240752 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:23.240752 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:23.240939 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 processing appconfig overrides Jul 15 05:15:23.241252 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:23.241252 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:23.241530 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 processing appconfig overrides Jul 15 05:15:23.241649 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:23.241649 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 15 05:15:23.241728 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 processing appconfig overrides Jul 15 05:15:23.242095 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2411 INFO Proxy environment variables: Jul 15 05:15:23.245602 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:23.245602 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:23.245602 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 processing appconfig overrides Jul 15 05:15:23.267998 containerd[2010]: time="2025-07-15T05:15:23.267887422Z" level=info msg="Start subscribing containerd event" Jul 15 05:15:23.268247 containerd[2010]: time="2025-07-15T05:15:23.267935252Z" level=info msg="Start recovering state" Jul 15 05:15:23.268247 containerd[2010]: time="2025-07-15T05:15:23.268198160Z" level=info msg="Start event monitor" Jul 15 05:15:23.268247 containerd[2010]: time="2025-07-15T05:15:23.268211313Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:15:23.268247 containerd[2010]: time="2025-07-15T05:15:23.268217906Z" level=info msg="Start streaming server" Jul 15 05:15:23.268247 containerd[2010]: time="2025-07-15T05:15:23.268232192Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:15:23.268971 containerd[2010]: time="2025-07-15T05:15:23.268838620Z" level=info msg="runtime interface starting up..." Jul 15 05:15:23.268971 containerd[2010]: time="2025-07-15T05:15:23.268925732Z" level=info msg="starting plugins..." Jul 15 05:15:23.268971 containerd[2010]: time="2025-07-15T05:15:23.268942165Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:15:23.269152 containerd[2010]: time="2025-07-15T05:15:23.269054730Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:15:23.269152 containerd[2010]: time="2025-07-15T05:15:23.269109323Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 05:15:23.269389 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 05:15:23.270919 containerd[2010]: time="2025-07-15T05:15:23.270899409Z" level=info msg="containerd successfully booted in 0.594134s" Jul 15 05:15:23.326101 sshd[2187]: Accepted publickey for core from 139.178.89.65 port 41068 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:23.332129 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:23.341662 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2411 INFO no_proxy: Jul 15 05:15:23.349665 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:15:23.352643 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:15:23.357531 systemd-logind[1983]: New session 1 of user core. Jul 15 05:15:23.380711 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 05:15:23.387155 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 05:15:23.402019 (systemd)[2240]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:15:23.407504 systemd-logind[1983]: New session c1 of user core. Jul 15 05:15:23.440434 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2412 INFO https_proxy: Jul 15 05:15:23.453148 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 15 05:15:23.453919 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 05:15:23.453919 amazon-ssm-agent[2212]: 2025/07/15 05:15:23 processing appconfig overrides Jul 15 05:15:23.484334 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2412 INFO http_proxy: Jul 15 05:15:23.484334 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2413 INFO Checking if agent identity type OnPrem can be assumed Jul 15 05:15:23.484334 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2415 INFO Checking if agent identity type EC2 can be assumed Jul 15 05:15:23.484334 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2904 INFO Agent will take identity from EC2 Jul 15 05:15:23.484334 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2917 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2918 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2918 INFO [amazon-ssm-agent] Starting Core Agent Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2918 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2918 INFO [Registrar] Starting registrar module Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2931 INFO [EC2Identity] Checking disk for registration info Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2931 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.2931 INFO [EC2Identity] Generating registration keypair Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4046 INFO [EC2Identity] Checking write access before registering Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4055 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4529 INFO [EC2Identity] EC2 registration was successful. Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4529 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4530 INFO [CredentialRefresher] credentialRefresher has started Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4530 INFO [CredentialRefresher] Starting credentials refresher loop Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4840 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 15 05:15:23.484762 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4842 INFO [CredentialRefresher] Credentials ready Jul 15 05:15:23.538548 amazon-ssm-agent[2212]: 2025-07-15 05:15:23.4844 INFO [CredentialRefresher] Next credential rotation will be in 29.999992606166668 minutes Jul 15 05:15:23.607161 systemd[2240]: Queued start job for default target default.target. Jul 15 05:15:23.613462 systemd[2240]: Created slice app.slice - User Application Slice. Jul 15 05:15:23.613502 systemd[2240]: Reached target paths.target - Paths. Jul 15 05:15:23.613560 systemd[2240]: Reached target timers.target - Timers. Jul 15 05:15:23.615356 systemd[2240]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:15:23.629007 systemd[2240]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:15:23.629157 systemd[2240]: Reached target sockets.target - Sockets. 
Jul 15 05:15:23.629211 systemd[2240]: Reached target basic.target - Basic System. Jul 15 05:15:23.629262 systemd[2240]: Reached target default.target - Main User Target. Jul 15 05:15:23.629319 systemd[2240]: Startup finished in 213ms. Jul 15 05:15:23.630110 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:15:23.651577 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 05:15:23.797763 systemd[1]: Started sshd@1-172.31.21.211:22-139.178.89.65:41072.service - OpenSSH per-connection server daemon (139.178.89.65:41072). Jul 15 05:15:23.973129 sshd[2252]: Accepted publickey for core from 139.178.89.65 port 41072 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:23.973847 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:23.979500 systemd-logind[1983]: New session 2 of user core. Jul 15 05:15:23.983470 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 05:15:24.106447 sshd[2255]: Connection closed by 139.178.89.65 port 41072 Jul 15 05:15:24.107036 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:24.111679 systemd[1]: sshd@1-172.31.21.211:22-139.178.89.65:41072.service: Deactivated successfully. Jul 15 05:15:24.113429 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 05:15:24.114413 systemd-logind[1983]: Session 2 logged out. Waiting for processes to exit. Jul 15 05:15:24.116346 systemd-logind[1983]: Removed session 2. Jul 15 05:15:24.137846 systemd[1]: Started sshd@2-172.31.21.211:22-139.178.89.65:41084.service - OpenSSH per-connection server daemon (139.178.89.65:41084). Jul 15 05:15:24.313627 sshd[2261]: Accepted publickey for core from 139.178.89.65 port 41084 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:24.315213 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:24.321116 systemd-logind[1983]: New session 3 of user core. Jul 15 05:15:24.324516 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 05:15:24.447472 sshd[2264]: Connection closed by 139.178.89.65 port 41084 Jul 15 05:15:24.448942 sshd-session[2261]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:24.452732 systemd[1]: sshd@2-172.31.21.211:22-139.178.89.65:41084.service: Deactivated successfully. Jul 15 05:15:24.454534 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 05:15:24.455491 systemd-logind[1983]: Session 3 logged out. Waiting for processes to exit. Jul 15 05:15:24.456614 systemd-logind[1983]: Removed session 3. Jul 15 05:15:24.497443 amazon-ssm-agent[2212]: 2025-07-15 05:15:24.4967 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 15 05:15:24.598097 amazon-ssm-agent[2212]: 2025-07-15 05:15:24.5008 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2271) started Jul 15 05:15:24.653065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:24.654173 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:15:24.655160 systemd[1]: Startup finished in 2.763s (kernel) + 9.912s (initrd) + 7.154s (userspace) = 19.829s. 
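
systemd reports boot completion above as 2.763s (kernel) + 9.912s (initrd) + 7.154s (userspace) = 19.829s. A minimal sketch for decomposing that figure per unit on a running system, assuming the systemd-analyze CLI that ships with systemd:

# Sketch: reproduce and break down the startup-time figure logged above
# using systemd-analyze. All commands are read-only.
import subprocess

for args in (["systemd-analyze"],                    # same kernel/initrd/userspace split
             ["systemd-analyze", "blame"],           # per-unit startup cost
             ["systemd-analyze", "critical-chain"]): # slowest dependency chain
    out = subprocess.run(args, capture_output=True, text=True, check=True)
    print(f"$ {' '.join(args)}\n{out.stdout}")
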
Jul 15 05:15:24.662936 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:24.699087 amazon-ssm-agent[2212]: 2025-07-15 05:15:24.5009 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 15 05:15:25.476018 kubelet[2283]: E0715 05:15:25.475963 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:25.478515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:25.478711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:25.479105 systemd[1]: kubelet.service: Consumed 1.005s CPU time, 267.9M memory peak. Jul 15 05:15:25.761882 ntpd[1978]: Listen normally on 7 eth0 [fe80::479:5dff:fe92:d1db%2]:123 Jul 15 05:15:25.762367 ntpd[1978]: 15 Jul 05:15:25 ntpd[1978]: Listen normally on 7 eth0 [fe80::479:5dff:fe92:d1db%2]:123 Jul 15 05:15:29.525549 systemd-resolved[1820]: Clock change detected. Flushing caches. Jul 15 05:15:35.243984 systemd[1]: Started sshd@3-172.31.21.211:22-139.178.89.65:56230.service - OpenSSH per-connection server daemon (139.178.89.65:56230). Jul 15 05:15:35.415199 sshd[2300]: Accepted publickey for core from 139.178.89.65 port 56230 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:35.416764 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:35.422546 systemd-logind[1983]: New session 4 of user core. Jul 15 05:15:35.433886 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:15:35.553109 sshd[2303]: Connection closed by 139.178.89.65 port 56230 Jul 15 05:15:35.553614 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:35.557385 systemd[1]: sshd@3-172.31.21.211:22-139.178.89.65:56230.service: Deactivated successfully. Jul 15 05:15:35.559026 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 05:15:35.559921 systemd-logind[1983]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:15:35.561043 systemd-logind[1983]: Removed session 4. Jul 15 05:15:35.588209 systemd[1]: Started sshd@4-172.31.21.211:22-139.178.89.65:56234.service - OpenSSH per-connection server daemon (139.178.89.65:56234). Jul 15 05:15:35.755995 sshd[2309]: Accepted publickey for core from 139.178.89.65 port 56234 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:35.757432 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:35.762808 systemd-logind[1983]: New session 5 of user core. Jul 15 05:15:35.771897 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 05:15:35.886818 sshd[2312]: Connection closed by 139.178.89.65 port 56234 Jul 15 05:15:35.887490 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:35.892064 systemd[1]: sshd@4-172.31.21.211:22-139.178.89.65:56234.service: Deactivated successfully. Jul 15 05:15:35.894013 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 05:15:35.894907 systemd-logind[1983]: Session 5 logged out. Waiting for processes to exit. 
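
The kubelet error above (and repeated below after the scheduled restart) is the expected pre-join state: /var/lib/kubelet/config.yaml does not exist until kubeadm init or kubeadm join writes it. A minimal sketch of dropping in a stand-in KubeletConfiguration, written in Python for consistency with the other sketches; the apiVersion and kind are the real kubelet config schema, while the field values (cgroup driver, cluster DNS) are illustrative assumptions only:

# Sketch: write a placeholder KubeletConfiguration so kubelet.service can load
# /var/lib/kubelet/config.yaml. On a kubeadm-managed node this file is normally
# generated by `kubeadm init` / `kubeadm join`; the values below are illustrative.
import pathlib

KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

MINIMAL = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches SystemdCgroup=true in the containerd config above
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                 # illustrative cluster DNS address
"""

KUBELET_CONFIG.parent.mkdir(parents=True, exist_ok=True)
KUBELET_CONFIG.write_text(MINIMAL)
print(f"wrote {KUBELET_CONFIG}")
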
Jul 15 05:15:35.896493 systemd-logind[1983]: Removed session 5. Jul 15 05:15:35.917923 systemd[1]: Started sshd@5-172.31.21.211:22-139.178.89.65:56248.service - OpenSSH per-connection server daemon (139.178.89.65:56248). Jul 15 05:15:36.092375 sshd[2318]: Accepted publickey for core from 139.178.89.65 port 56248 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:36.093824 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:36.098772 systemd-logind[1983]: New session 6 of user core. Jul 15 05:15:36.109886 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 05:15:36.226404 sshd[2321]: Connection closed by 139.178.89.65 port 56248 Jul 15 05:15:36.227277 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:36.231052 systemd[1]: sshd@5-172.31.21.211:22-139.178.89.65:56248.service: Deactivated successfully. Jul 15 05:15:36.232712 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:15:36.233456 systemd-logind[1983]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:15:36.234573 systemd-logind[1983]: Removed session 6. Jul 15 05:15:36.257626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 05:15:36.259304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:36.261881 systemd[1]: Started sshd@6-172.31.21.211:22-139.178.89.65:56258.service - OpenSSH per-connection server daemon (139.178.89.65:56258). Jul 15 05:15:36.429539 sshd[2328]: Accepted publickey for core from 139.178.89.65 port 56258 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:36.431002 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:36.436478 systemd-logind[1983]: New session 7 of user core. Jul 15 05:15:36.443862 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 05:15:36.475256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:36.479600 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:36.528755 kubelet[2339]: E0715 05:15:36.528696 2339 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:36.532535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:36.532811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:36.533296 systemd[1]: kubelet.service: Consumed 166ms CPU time, 109.1M memory peak. Jul 15 05:15:36.574645 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:15:36.575085 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:36.586887 sudo[2346]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:36.610079 sshd[2335]: Connection closed by 139.178.89.65 port 56258 Jul 15 05:15:36.610770 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:36.614806 systemd[1]: sshd@6-172.31.21.211:22-139.178.89.65:56258.service: Deactivated successfully. 
Jul 15 05:15:36.616354 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 05:15:36.617185 systemd-logind[1983]: Session 7 logged out. Waiting for processes to exit. Jul 15 05:15:36.618545 systemd-logind[1983]: Removed session 7. Jul 15 05:15:36.638403 systemd[1]: Started sshd@7-172.31.21.211:22-139.178.89.65:56264.service - OpenSSH per-connection server daemon (139.178.89.65:56264). Jul 15 05:15:36.807156 sshd[2352]: Accepted publickey for core from 139.178.89.65 port 56264 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:36.808729 sshd-session[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:36.813331 systemd-logind[1983]: New session 8 of user core. Jul 15 05:15:36.819918 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 05:15:36.916411 sudo[2357]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:15:36.916731 sudo[2357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:36.922085 sudo[2357]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:36.927495 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:15:36.927771 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:36.937336 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:15:36.972402 augenrules[2379]: No rules Jul 15 05:15:36.973608 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:15:36.973834 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:15:36.974651 sudo[2356]: pam_unix(sudo:session): session closed for user root Jul 15 05:15:36.996178 sshd[2355]: Connection closed by 139.178.89.65 port 56264 Jul 15 05:15:36.996787 sshd-session[2352]: pam_unix(sshd:session): session closed for user core Jul 15 05:15:37.000126 systemd[1]: sshd@7-172.31.21.211:22-139.178.89.65:56264.service: Deactivated successfully. Jul 15 05:15:37.001945 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:15:37.003707 systemd-logind[1983]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:15:37.005016 systemd-logind[1983]: Removed session 8. Jul 15 05:15:37.030201 systemd[1]: Started sshd@8-172.31.21.211:22-139.178.89.65:56280.service - OpenSSH per-connection server daemon (139.178.89.65:56280). Jul 15 05:15:37.206872 sshd[2388]: Accepted publickey for core from 139.178.89.65 port 56280 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:15:37.208369 sshd-session[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:15:37.213567 systemd-logind[1983]: New session 9 of user core. Jul 15 05:15:37.220879 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 05:15:37.320433 sudo[2392]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:15:37.320751 sudo[2392]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:15:38.321299 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 15 05:15:38.332117 (dockerd)[2412]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:15:38.769892 dockerd[2412]: time="2025-07-15T05:15:38.769833015Z" level=info msg="Starting up" Jul 15 05:15:38.770694 dockerd[2412]: time="2025-07-15T05:15:38.770652874Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:15:38.782349 dockerd[2412]: time="2025-07-15T05:15:38.782241337Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:15:38.801326 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport679970112-merged.mount: Deactivated successfully. Jul 15 05:15:38.839073 dockerd[2412]: time="2025-07-15T05:15:38.838874093Z" level=info msg="Loading containers: start." Jul 15 05:15:38.850706 kernel: Initializing XFRM netlink socket Jul 15 05:15:39.184371 (udev-worker)[2433]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:15:39.235522 systemd-networkd[1814]: docker0: Link UP Jul 15 05:15:39.241448 dockerd[2412]: time="2025-07-15T05:15:39.241407042Z" level=info msg="Loading containers: done." Jul 15 05:15:39.257007 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck865413371-merged.mount: Deactivated successfully. Jul 15 05:15:39.262217 dockerd[2412]: time="2025-07-15T05:15:39.262169368Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:15:39.262370 dockerd[2412]: time="2025-07-15T05:15:39.262260794Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:15:39.262370 dockerd[2412]: time="2025-07-15T05:15:39.262346148Z" level=info msg="Initializing buildkit" Jul 15 05:15:39.291352 dockerd[2412]: time="2025-07-15T05:15:39.291307214Z" level=info msg="Completed buildkit initialization" Jul 15 05:15:39.295539 dockerd[2412]: time="2025-07-15T05:15:39.295495947Z" level=info msg="Daemon has completed initialization" Jul 15 05:15:39.296747 dockerd[2412]: time="2025-07-15T05:15:39.295543992Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:15:39.295891 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:15:40.181353 containerd[2010]: time="2025-07-15T05:15:40.181317344Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 15 05:15:40.862962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888535926.mount: Deactivated successfully. 
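The dockerd messages above carry RFC 3339 timestamps with nanosecond precision, so the daemon's startup time can be read straight off the log: from "Starting up" at 05:15:38.769833015Z to "Daemon has completed initialization" at 05:15:39.295495947Z is roughly half a second. A small Python sketch of that arithmetic (an illustration only, not part of the logged tooling; Python datetimes carry microseconds, so the nanosecond digits are truncated):

    from datetime import datetime, timezone
    import re

    def parse_rfc3339(ts: str) -> datetime:
        """Parse timestamps like 2025-07-15T05:15:38.769833015Z, truncating to microseconds."""
        m = re.match(r"(.*\.\d{1,6})\d*Z$", ts)
        trimmed = m.group(1) if m else ts.rstrip("Z")
        return datetime.strptime(trimmed, "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)

    start = parse_rfc3339("2025-07-15T05:15:38.769833015Z")   # "Starting up"
    ready = parse_rfc3339("2025-07-15T05:15:39.295495947Z")   # "Daemon has completed initialization"
    print(f"dockerd startup took {(ready - start).total_seconds():.3f}s")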
Jul 15 05:15:42.455367 containerd[2010]: time="2025-07-15T05:15:42.455311121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:42.456365 containerd[2010]: time="2025-07-15T05:15:42.456318909Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 15 05:15:42.457781 containerd[2010]: time="2025-07-15T05:15:42.457735147Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:42.460625 containerd[2010]: time="2025-07-15T05:15:42.460490150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:42.462285 containerd[2010]: time="2025-07-15T05:15:42.461725412Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.280363727s" Jul 15 05:15:42.462285 containerd[2010]: time="2025-07-15T05:15:42.461766466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 15 05:15:42.463853 containerd[2010]: time="2025-07-15T05:15:42.463827807Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 15 05:15:44.353798 containerd[2010]: time="2025-07-15T05:15:44.353745579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:44.354737 containerd[2010]: time="2025-07-15T05:15:44.354697706Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 15 05:15:44.356046 containerd[2010]: time="2025-07-15T05:15:44.356004674Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:44.358832 containerd[2010]: time="2025-07-15T05:15:44.358788471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:44.359732 containerd[2010]: time="2025-07-15T05:15:44.359476921Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.895493817s" Jul 15 05:15:44.359732 containerd[2010]: time="2025-07-15T05:15:44.359506239Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 15 05:15:44.360309 
containerd[2010]: time="2025-07-15T05:15:44.360274291Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 15 05:15:45.880689 containerd[2010]: time="2025-07-15T05:15:45.880616711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:45.881652 containerd[2010]: time="2025-07-15T05:15:45.881612735Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 15 05:15:45.882911 containerd[2010]: time="2025-07-15T05:15:45.882883078Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:45.885646 containerd[2010]: time="2025-07-15T05:15:45.885599953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:45.886474 containerd[2010]: time="2025-07-15T05:15:45.886355573Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.526049767s" Jul 15 05:15:45.886474 containerd[2010]: time="2025-07-15T05:15:45.886388153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 15 05:15:45.886993 containerd[2010]: time="2025-07-15T05:15:45.886959329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 15 05:15:46.598819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 05:15:46.600183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:46.848788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:46.858344 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:15:46.909001 kubelet[2694]: E0715 05:15:46.908953 2694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:15:46.911843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:15:46.911991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:15:46.912279 systemd[1]: kubelet.service: Consumed 155ms CPU time, 110.6M memory peak. Jul 15 05:15:47.210943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301798017.mount: Deactivated successfully. 
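The run.go:72 failure above (now seen on both kubelet starts) is the expected state on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, and until it exists systemd keeps rescheduling the unit and the kubelet keeps exiting with status 1. A minimal Python sketch that waits for the file named in the error (an illustration only; the path is copied from the log, the timeout values are arbitrary):

    import os
    import time

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the error message above

    def wait_for_kubelet_config(timeout_s: float = 300.0, poll_s: float = 5.0) -> bool:
        """Poll until the kubelet config file exists or the timeout expires."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if os.path.exists(KUBELET_CONFIG):
                return True
            time.sleep(poll_s)
        return False

    if __name__ == "__main__":
        print("kubelet config present:", wait_for_kubelet_config(timeout_s=10, poll_s=1))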
Jul 15 05:15:47.719802 containerd[2010]: time="2025-07-15T05:15:47.719724936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.720901 containerd[2010]: time="2025-07-15T05:15:47.720859296Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 15 05:15:47.722599 containerd[2010]: time="2025-07-15T05:15:47.722553328Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.724640 containerd[2010]: time="2025-07-15T05:15:47.724592532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:47.725313 containerd[2010]: time="2025-07-15T05:15:47.725032688Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.837979375s" Jul 15 05:15:47.725313 containerd[2010]: time="2025-07-15T05:15:47.725065150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 15 05:15:47.725546 containerd[2010]: time="2025-07-15T05:15:47.725525718Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 15 05:15:48.251801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount878126308.mount: Deactivated successfully. 
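Each completed pull above pairs a byte count ("bytes read") with the elapsed time containerd reports, so effective pull throughput falls out directly; the kube-apiserver image, for example, is 30,079,099 bytes in about 2.28 s, roughly 13 MB/s. A throwaway Python helper for that calculation (the figures are copied from the pull messages above; the helper itself is only an illustration):

    def pull_rate_mb_s(bytes_read: int, seconds: float) -> float:
        """Effective pull rate in decimal megabytes per second."""
        return bytes_read / seconds / 1_000_000

    pulls = {
        "kube-apiserver:v1.33.2":          (30_079_099, 2.280363727),
        "kube-controller-manager:v1.33.2": (26_018_946, 1.895493817),
        "kube-scheduler:v1.33.2":          (20_155_055, 1.526049767),
        "kube-proxy:v1.33.2":              (31_892_746, 1.837979375),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {pull_rate_mb_s(size, secs):.1f} MB/s")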
Jul 15 05:15:49.479748 containerd[2010]: time="2025-07-15T05:15:49.479697875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:49.480942 containerd[2010]: time="2025-07-15T05:15:49.480899564Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 15 05:15:49.482400 containerd[2010]: time="2025-07-15T05:15:49.482350022Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:49.484899 containerd[2010]: time="2025-07-15T05:15:49.484843554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:49.485964 containerd[2010]: time="2025-07-15T05:15:49.485620691Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.76001808s" Jul 15 05:15:49.485964 containerd[2010]: time="2025-07-15T05:15:49.485654252Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 15 05:15:49.486337 containerd[2010]: time="2025-07-15T05:15:49.486317753Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:15:49.959716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024661828.mount: Deactivated successfully. 
Jul 15 05:15:49.966458 containerd[2010]: time="2025-07-15T05:15:49.966397433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:49.967269 containerd[2010]: time="2025-07-15T05:15:49.967234181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:15:49.968951 containerd[2010]: time="2025-07-15T05:15:49.968900996Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:49.971100 containerd[2010]: time="2025-07-15T05:15:49.971056254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:15:49.971719 containerd[2010]: time="2025-07-15T05:15:49.971570874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 485.148624ms" Jul 15 05:15:49.971719 containerd[2010]: time="2025-07-15T05:15:49.971596484Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:15:49.972188 containerd[2010]: time="2025-07-15T05:15:49.972147726Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 15 05:15:50.438304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014026280.mount: Deactivated successfully. 
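The mount unit names in the cleanup lines above use systemd's unit-name escaping: "\x2d" stands for a literal "-" inside a path component, while the remaining dashes separate the components of the original path (systemd-escape(1) performs the forward transformation). A small Python sketch that reverses just the \xNN escapes seen here; it deliberately ignores the rest of the escaping rules:

    import re

    def unescape_systemd(name: str) -> str:
        """Decode \\xNN escapes in a systemd unit name, e.g. 'a\\x2db' -> 'a-b'."""
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    unit = r"var-lib-containerd-tmpmounts-containerd\x2dmount4014026280.mount"
    print(unescape_systemd(unit))
    # -> var-lib-containerd-tmpmounts-containerd-mount4014026280.mount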
Jul 15 05:15:52.350440 containerd[2010]: time="2025-07-15T05:15:52.350385186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:52.351530 containerd[2010]: time="2025-07-15T05:15:52.351486375Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 15 05:15:52.353055 containerd[2010]: time="2025-07-15T05:15:52.352999376Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:52.355562 containerd[2010]: time="2025-07-15T05:15:52.355509366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:15:52.356346 containerd[2010]: time="2025-07-15T05:15:52.356314569Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.384140496s" Jul 15 05:15:52.356405 containerd[2010]: time="2025-07-15T05:15:52.356354721Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 15 05:15:53.847475 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 05:15:56.696448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:56.696739 systemd[1]: kubelet.service: Consumed 155ms CPU time, 110.6M memory peak. Jul 15 05:15:56.699439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:56.733011 systemd[1]: Reload requested from client PID 2847 ('systemctl') (unit session-9.scope)... Jul 15 05:15:56.733031 systemd[1]: Reloading... Jul 15 05:15:56.865718 zram_generator::config[2891]: No configuration found. Jul 15 05:15:57.011413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:15:57.144361 systemd[1]: Reloading finished in 410 ms. Jul 15 05:15:57.201071 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:15:57.201329 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:15:57.201709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:57.201770 systemd[1]: kubelet.service: Consumed 138ms CPU time, 98.1M memory peak. Jul 15 05:15:57.204227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:15:57.870776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:15:57.880031 (kubelet)[2955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:15:57.934140 kubelet[2955]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
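The kubelet entries in this section use klog's header layout: a severity letter (I/W/E/F), the date as MMDD, a wall-clock time, the process id, and the emitting file:line before the closing bracket. A Python sketch that splits one such header apart, for illustration only (the sample line is copied from a kubelet entry in this log):

    import re

    KLOG_HEADER = re.compile(
        r"^(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+(?P<pid>\d+)\s+"
        r"(?P<source>\S+:\d+)\]\s(?P<message>.*)$"
    )

    line = 'I0715 05:15:58.371326 2955 watchdog_linux.go:99] "Systemd watchdog is not enabled"'
    fields = KLOG_HEADER.match(line).groupdict()
    print(fields["severity"], fields["time"], fields["source"], fields["message"])
    # -> I 05:15:58.371326 watchdog_linux.go:99 "Systemd watchdog is not enabled"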
Jul 15 05:15:57.934140 kubelet[2955]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:15:57.934140 kubelet[2955]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:15:57.960534 kubelet[2955]: I0715 05:15:57.960097 2955 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:15:58.236023 kubelet[2955]: I0715 05:15:58.235599 2955 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 05:15:58.236023 kubelet[2955]: I0715 05:15:58.235632 2955 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:15:58.236023 kubelet[2955]: I0715 05:15:58.235970 2955 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 05:15:58.288174 kubelet[2955]: E0715 05:15:58.288107 2955 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.211:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 05:15:58.290587 kubelet[2955]: I0715 05:15:58.290401 2955 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:15:58.318832 kubelet[2955]: I0715 05:15:58.318801 2955 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:15:58.330760 kubelet[2955]: I0715 05:15:58.330725 2955 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:15:58.338466 kubelet[2955]: I0715 05:15:58.338398 2955 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:15:58.343264 kubelet[2955]: I0715 05:15:58.338458 2955 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-211","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:15:58.343264 kubelet[2955]: I0715 05:15:58.343147 2955 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:15:58.343264 kubelet[2955]: I0715 05:15:58.343172 2955 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 05:15:58.343526 kubelet[2955]: I0715 05:15:58.343312 2955 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:15:58.346792 kubelet[2955]: I0715 05:15:58.346753 2955 kubelet.go:480] "Attempting to sync node with API server" Jul 15 05:15:58.346792 kubelet[2955]: I0715 05:15:58.346779 2955 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:15:58.346792 kubelet[2955]: I0715 05:15:58.346803 2955 kubelet.go:386] "Adding apiserver pod source" Jul 15 05:15:58.349129 kubelet[2955]: I0715 05:15:58.349026 2955 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:15:58.362736 kubelet[2955]: I0715 05:15:58.362547 2955 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:15:58.364961 kubelet[2955]: I0715 05:15:58.364871 2955 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 05:15:58.366546 kubelet[2955]: W0715 05:15:58.365886 2955 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 05:15:58.371357 kubelet[2955]: I0715 05:15:58.371326 2955 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:15:58.371450 kubelet[2955]: I0715 05:15:58.371385 2955 server.go:1289] "Started kubelet" Jul 15 05:15:58.373013 kubelet[2955]: E0715 05:15:58.372810 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 05:15:58.373013 kubelet[2955]: E0715 05:15:58.372810 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-211&limit=500&resourceVersion=0\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 05:15:58.376711 kubelet[2955]: I0715 05:15:58.376680 2955 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:15:58.381312 kubelet[2955]: E0715 05:15:58.378045 2955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.211:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.211:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-211.185254e786817273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-211,UID:ip-172-31-21-211,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-211,},FirstTimestamp:2025-07-15 05:15:58.371353203 +0000 UTC m=+0.487355952,LastTimestamp:2025-07-15 05:15:58.371353203 +0000 UTC m=+0.487355952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-211,}" Jul 15 05:15:58.385161 kubelet[2955]: I0715 05:15:58.384987 2955 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:15:58.385246 kubelet[2955]: I0715 05:15:58.385190 2955 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:15:58.385810 kubelet[2955]: E0715 05:15:58.385512 2955 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-211\" not found" Jul 15 05:15:58.389695 kubelet[2955]: I0715 05:15:58.387582 2955 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:15:58.389695 kubelet[2955]: I0715 05:15:58.389405 2955 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:15:58.390591 kubelet[2955]: E0715 05:15:58.390570 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 05:15:58.390684 kubelet[2955]: E0715 05:15:58.390646 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-211?timeout=10s\": dial tcp 172.31.21.211:6443: connect: connection refused" interval="200ms" Jul 15 05:15:58.391116 kubelet[2955]: I0715 
05:15:58.391102 2955 server.go:317] "Adding debug handlers to kubelet server" Jul 15 05:15:58.392781 kubelet[2955]: I0715 05:15:58.392722 2955 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:15:58.393092 kubelet[2955]: I0715 05:15:58.393080 2955 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:15:58.393436 kubelet[2955]: I0715 05:15:58.393418 2955 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:15:58.395592 kubelet[2955]: I0715 05:15:58.395409 2955 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:15:58.396960 kubelet[2955]: E0715 05:15:58.396939 2955 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:15:58.398255 kubelet[2955]: I0715 05:15:58.397567 2955 factory.go:223] Registration of the containerd container factory successfully Jul 15 05:15:58.398255 kubelet[2955]: I0715 05:15:58.397580 2955 factory.go:223] Registration of the systemd container factory successfully Jul 15 05:15:58.414374 kubelet[2955]: I0715 05:15:58.414334 2955 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:15:58.414374 kubelet[2955]: I0715 05:15:58.414352 2955 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:15:58.414374 kubelet[2955]: I0715 05:15:58.414369 2955 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:15:58.418157 kubelet[2955]: I0715 05:15:58.418131 2955 policy_none.go:49] "None policy: Start" Jul 15 05:15:58.418157 kubelet[2955]: I0715 05:15:58.418159 2955 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:15:58.418398 kubelet[2955]: I0715 05:15:58.418174 2955 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:15:58.429540 kubelet[2955]: I0715 05:15:58.429404 2955 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 05:15:58.434092 kubelet[2955]: I0715 05:15:58.432869 2955 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 05:15:58.434092 kubelet[2955]: I0715 05:15:58.433001 2955 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 05:15:58.434092 kubelet[2955]: I0715 05:15:58.433029 2955 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
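All of the "connect: connection refused" errors above point at https://172.31.21.211:6443, the local API server endpoint; nothing is listening there yet because the apiserver is itself one of the static pods this kubelet is about to create, so the errors clear up on their own once that pod is running. A minimal TCP reachability probe in Python (a sketch for illustration; the address is copied from the log):

    import socket

    def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port can be established."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("apiserver reachable:", port_open("172.31.21.211", 6443))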
Jul 15 05:15:58.434092 kubelet[2955]: I0715 05:15:58.433051 2955 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 05:15:58.434092 kubelet[2955]: E0715 05:15:58.433098 2955 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:15:58.436041 kubelet[2955]: E0715 05:15:58.436010 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 05:15:58.439636 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 05:15:58.456875 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:15:58.460882 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:15:58.471809 kubelet[2955]: E0715 05:15:58.471777 2955 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 05:15:58.472027 kubelet[2955]: I0715 05:15:58.472009 2955 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:15:58.472084 kubelet[2955]: I0715 05:15:58.472033 2955 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:15:58.472317 kubelet[2955]: I0715 05:15:58.472292 2955 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:15:58.474674 kubelet[2955]: E0715 05:15:58.474467 2955 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 05:15:58.474674 kubelet[2955]: E0715 05:15:58.474552 2955 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-211\" not found" Jul 15 05:15:58.547628 systemd[1]: Created slice kubepods-burstable-pod466a894bad73b6b41249d09f33d89252.slice - libcontainer container kubepods-burstable-pod466a894bad73b6b41249d09f33d89252.slice. Jul 15 05:15:58.554439 kubelet[2955]: E0715 05:15:58.554402 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:15:58.559626 systemd[1]: Created slice kubepods-burstable-pode52b91efcf81ccbc2c3266136c54f575.slice - libcontainer container kubepods-burstable-pode52b91efcf81ccbc2c3266136c54f575.slice. Jul 15 05:15:58.570002 kubelet[2955]: E0715 05:15:58.569972 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:15:58.573277 systemd[1]: Created slice kubepods-burstable-poddc3e88ea2eb50e27678f27b91f6d80b0.slice - libcontainer container kubepods-burstable-poddc3e88ea2eb50e27678f27b91f6d80b0.slice. 
Jul 15 05:15:58.574624 kubelet[2955]: I0715 05:15:58.574544 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-211" Jul 15 05:15:58.574963 kubelet[2955]: E0715 05:15:58.574939 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.211:6443/api/v1/nodes\": dial tcp 172.31.21.211:6443: connect: connection refused" node="ip-172-31-21-211" Jul 15 05:15:58.575535 kubelet[2955]: E0715 05:15:58.575485 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:15:58.591043 kubelet[2955]: I0715 05:15:58.591006 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:15:58.591420 kubelet[2955]: I0715 05:15:58.591308 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:15:58.591420 kubelet[2955]: I0715 05:15:58.591361 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc3e88ea2eb50e27678f27b91f6d80b0-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-211\" (UID: \"dc3e88ea2eb50e27678f27b91f6d80b0\") " pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:15:58.591420 kubelet[2955]: E0715 05:15:58.591199 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-211?timeout=10s\": dial tcp 172.31.21.211:6443: connect: connection refused" interval="400ms" Jul 15 05:15:58.591420 kubelet[2955]: I0715 05:15:58.591380 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc3e88ea2eb50e27678f27b91f6d80b0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-211\" (UID: \"dc3e88ea2eb50e27678f27b91f6d80b0\") " pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:15:58.591420 kubelet[2955]: I0715 05:15:58.591425 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:15:58.591712 kubelet[2955]: I0715 05:15:58.591442 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e52b91efcf81ccbc2c3266136c54f575-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-211\" (UID: \"e52b91efcf81ccbc2c3266136c54f575\") " pod="kube-system/kube-scheduler-ip-172-31-21-211" Jul 15 05:15:58.591712 kubelet[2955]: I0715 05:15:58.591471 2955 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc3e88ea2eb50e27678f27b91f6d80b0-ca-certs\") pod \"kube-apiserver-ip-172-31-21-211\" (UID: \"dc3e88ea2eb50e27678f27b91f6d80b0\") " pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:15:58.591712 kubelet[2955]: I0715 05:15:58.591484 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:15:58.591712 kubelet[2955]: I0715 05:15:58.591520 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:15:58.776693 kubelet[2955]: I0715 05:15:58.776637 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-211" Jul 15 05:15:58.777023 kubelet[2955]: E0715 05:15:58.776989 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.211:6443/api/v1/nodes\": dial tcp 172.31.21.211:6443: connect: connection refused" node="ip-172-31-21-211" Jul 15 05:15:58.857567 containerd[2010]: time="2025-07-15T05:15:58.857511272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-211,Uid:466a894bad73b6b41249d09f33d89252,Namespace:kube-system,Attempt:0,}" Jul 15 05:15:58.878772 containerd[2010]: time="2025-07-15T05:15:58.878726840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-211,Uid:e52b91efcf81ccbc2c3266136c54f575,Namespace:kube-system,Attempt:0,}" Jul 15 05:15:58.881817 containerd[2010]: time="2025-07-15T05:15:58.881775242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-211,Uid:dc3e88ea2eb50e27678f27b91f6d80b0,Namespace:kube-system,Attempt:0,}" Jul 15 05:15:58.992052 kubelet[2955]: E0715 05:15:58.991987 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-211?timeout=10s\": dial tcp 172.31.21.211:6443: connect: connection refused" interval="800ms" Jul 15 05:15:59.032863 containerd[2010]: time="2025-07-15T05:15:59.032670937Z" level=info msg="connecting to shim 0c5c8147e9650a7052f3c40ced71066b4e5df38632af5696d505e1fc7d07521d" address="unix:///run/containerd/s/bbde67b195baf970f0205a573ff65289e276b6ccb9ce5967e567b7595ae7627b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:15:59.042170 containerd[2010]: time="2025-07-15T05:15:59.042111499Z" level=info msg="connecting to shim 755e62372d618906fd4400e6186aa9d2610e3a4558f9028ef3169a296a22910f" address="unix:///run/containerd/s/0e541796f976eb3ee67873b863a254afa87eabb664f39c6ea9ab5172fcb324f7" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:15:59.042892 containerd[2010]: time="2025-07-15T05:15:59.042854980Z" level=info msg="connecting to shim 7a43ace7862496331bd6601e0e762dc132c65e59e8fd653d7ab1f37ca0f41f57" 
address="unix:///run/containerd/s/f9ea3a1cb25f00a9b458f878abb4655e802a95e9daa66f343b157b15a8da864b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:15:59.162885 systemd[1]: Started cri-containerd-0c5c8147e9650a7052f3c40ced71066b4e5df38632af5696d505e1fc7d07521d.scope - libcontainer container 0c5c8147e9650a7052f3c40ced71066b4e5df38632af5696d505e1fc7d07521d. Jul 15 05:15:59.165255 systemd[1]: Started cri-containerd-755e62372d618906fd4400e6186aa9d2610e3a4558f9028ef3169a296a22910f.scope - libcontainer container 755e62372d618906fd4400e6186aa9d2610e3a4558f9028ef3169a296a22910f. Jul 15 05:15:59.167093 systemd[1]: Started cri-containerd-7a43ace7862496331bd6601e0e762dc132c65e59e8fd653d7ab1f37ca0f41f57.scope - libcontainer container 7a43ace7862496331bd6601e0e762dc132c65e59e8fd653d7ab1f37ca0f41f57. Jul 15 05:15:59.184245 kubelet[2955]: I0715 05:15:59.184133 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-211" Jul 15 05:15:59.185388 kubelet[2955]: E0715 05:15:59.185337 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.211:6443/api/v1/nodes\": dial tcp 172.31.21.211:6443: connect: connection refused" node="ip-172-31-21-211" Jul 15 05:15:59.240961 containerd[2010]: time="2025-07-15T05:15:59.240806304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-211,Uid:466a894bad73b6b41249d09f33d89252,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c5c8147e9650a7052f3c40ced71066b4e5df38632af5696d505e1fc7d07521d\"" Jul 15 05:15:59.245679 containerd[2010]: time="2025-07-15T05:15:59.245575223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-211,Uid:dc3e88ea2eb50e27678f27b91f6d80b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a43ace7862496331bd6601e0e762dc132c65e59e8fd653d7ab1f37ca0f41f57\"" Jul 15 05:15:59.250732 containerd[2010]: time="2025-07-15T05:15:59.250699437Z" level=info msg="CreateContainer within sandbox \"0c5c8147e9650a7052f3c40ced71066b4e5df38632af5696d505e1fc7d07521d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:15:59.257201 containerd[2010]: time="2025-07-15T05:15:59.257140181Z" level=info msg="CreateContainer within sandbox \"7a43ace7862496331bd6601e0e762dc132c65e59e8fd653d7ab1f37ca0f41f57\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:15:59.272405 containerd[2010]: time="2025-07-15T05:15:59.272368275Z" level=info msg="Container e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:15:59.275575 containerd[2010]: time="2025-07-15T05:15:59.275509328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-211,Uid:e52b91efcf81ccbc2c3266136c54f575,Namespace:kube-system,Attempt:0,} returns sandbox id \"755e62372d618906fd4400e6186aa9d2610e3a4558f9028ef3169a296a22910f\"" Jul 15 05:15:59.281864 containerd[2010]: time="2025-07-15T05:15:59.281817035Z" level=info msg="Container 86199b33dd01212a44bd6a8da94eb256aef267f0a6489a33f89cbafae33928bd: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:15:59.282703 containerd[2010]: time="2025-07-15T05:15:59.282641357Z" level=info msg="CreateContainer within sandbox \"755e62372d618906fd4400e6186aa9d2610e3a4558f9028ef3169a296a22910f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:15:59.305491 containerd[2010]: time="2025-07-15T05:15:59.305443156Z" level=info msg="CreateContainer 
within sandbox \"0c5c8147e9650a7052f3c40ced71066b4e5df38632af5696d505e1fc7d07521d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047\"" Jul 15 05:15:59.307086 containerd[2010]: time="2025-07-15T05:15:59.307053095Z" level=info msg="CreateContainer within sandbox \"7a43ace7862496331bd6601e0e762dc132c65e59e8fd653d7ab1f37ca0f41f57\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"86199b33dd01212a44bd6a8da94eb256aef267f0a6489a33f89cbafae33928bd\"" Jul 15 05:15:59.307305 containerd[2010]: time="2025-07-15T05:15:59.307232397Z" level=info msg="StartContainer for \"e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047\"" Jul 15 05:15:59.308279 containerd[2010]: time="2025-07-15T05:15:59.308168643Z" level=info msg="connecting to shim e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047" address="unix:///run/containerd/s/bbde67b195baf970f0205a573ff65289e276b6ccb9ce5967e567b7595ae7627b" protocol=ttrpc version=3 Jul 15 05:15:59.308719 containerd[2010]: time="2025-07-15T05:15:59.308686577Z" level=info msg="StartContainer for \"86199b33dd01212a44bd6a8da94eb256aef267f0a6489a33f89cbafae33928bd\"" Jul 15 05:15:59.309544 containerd[2010]: time="2025-07-15T05:15:59.309498658Z" level=info msg="connecting to shim 86199b33dd01212a44bd6a8da94eb256aef267f0a6489a33f89cbafae33928bd" address="unix:///run/containerd/s/f9ea3a1cb25f00a9b458f878abb4655e802a95e9daa66f343b157b15a8da864b" protocol=ttrpc version=3 Jul 15 05:15:59.313448 containerd[2010]: time="2025-07-15T05:15:59.313264636Z" level=info msg="Container b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:15:59.317421 kubelet[2955]: E0715 05:15:59.317390 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 05:15:59.330857 systemd[1]: Started cri-containerd-e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047.scope - libcontainer container e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047. Jul 15 05:15:59.332617 containerd[2010]: time="2025-07-15T05:15:59.332580235Z" level=info msg="CreateContainer within sandbox \"755e62372d618906fd4400e6186aa9d2610e3a4558f9028ef3169a296a22910f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2\"" Jul 15 05:15:59.333885 containerd[2010]: time="2025-07-15T05:15:59.333451844Z" level=info msg="StartContainer for \"b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2\"" Jul 15 05:15:59.338062 containerd[2010]: time="2025-07-15T05:15:59.338020347Z" level=info msg="connecting to shim b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2" address="unix:///run/containerd/s/0e541796f976eb3ee67873b863a254afa87eabb664f39c6ea9ab5172fcb324f7" protocol=ttrpc version=3 Jul 15 05:15:59.338878 systemd[1]: Started cri-containerd-86199b33dd01212a44bd6a8da94eb256aef267f0a6489a33f89cbafae33928bd.scope - libcontainer container 86199b33dd01212a44bd6a8da94eb256aef267f0a6489a33f89cbafae33928bd. 
Jul 15 05:15:59.372261 systemd[1]: Started cri-containerd-b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2.scope - libcontainer container b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2. Jul 15 05:15:59.428476 containerd[2010]: time="2025-07-15T05:15:59.427715836Z" level=info msg="StartContainer for \"e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047\" returns successfully" Jul 15 05:15:59.428476 containerd[2010]: time="2025-07-15T05:15:59.427747513Z" level=info msg="StartContainer for \"86199b33dd01212a44bd6a8da94eb256aef267f0a6489a33f89cbafae33928bd\" returns successfully" Jul 15 05:15:59.453476 kubelet[2955]: E0715 05:15:59.453447 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:15:59.458921 kubelet[2955]: E0715 05:15:59.458877 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 05:15:59.459398 kubelet[2955]: E0715 05:15:59.459324 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:15:59.465759 containerd[2010]: time="2025-07-15T05:15:59.465723100Z" level=info msg="StartContainer for \"b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2\" returns successfully" Jul 15 05:15:59.719092 kubelet[2955]: E0715 05:15:59.718986 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-211&limit=500&resourceVersion=0\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 05:15:59.792932 kubelet[2955]: E0715 05:15:59.792889 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-211?timeout=10s\": dial tcp 172.31.21.211:6443: connect: connection refused" interval="1.6s" Jul 15 05:15:59.880105 kubelet[2955]: E0715 05:15:59.879189 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.211:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 05:15:59.988833 kubelet[2955]: I0715 05:15:59.988752 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-211" Jul 15 05:15:59.989115 kubelet[2955]: E0715 05:15:59.989062 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.211:6443/api/v1/nodes\": dial tcp 172.31.21.211:6443: connect: connection refused" node="ip-172-31-21-211" Jul 15 05:16:00.464544 kubelet[2955]: E0715 05:16:00.464507 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:16:00.464969 kubelet[2955]: E0715 05:16:00.464920 2955 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:16:01.468713 kubelet[2955]: E0715 05:16:01.468197 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:16:01.592476 kubelet[2955]: I0715 05:16:01.592163 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-211" Jul 15 05:16:02.469336 kubelet[2955]: E0715 05:16:02.469042 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:16:03.260274 kubelet[2955]: E0715 05:16:03.260224 2955 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-211\" not found" node="ip-172-31-21-211" Jul 15 05:16:03.345097 kubelet[2955]: I0715 05:16:03.344903 2955 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-211" Jul 15 05:16:03.374771 kubelet[2955]: I0715 05:16:03.373713 2955 apiserver.go:52] "Watching apiserver" Jul 15 05:16:03.388691 kubelet[2955]: I0715 05:16:03.388652 2955 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:16:03.388974 kubelet[2955]: I0715 05:16:03.388855 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:03.397838 kubelet[2955]: E0715 05:16:03.397653 2955 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-211\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:03.397838 kubelet[2955]: I0715 05:16:03.397710 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-211" Jul 15 05:16:03.401426 kubelet[2955]: E0715 05:16:03.401233 2955 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-211\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-211" Jul 15 05:16:03.401426 kubelet[2955]: I0715 05:16:03.401267 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:16:03.404004 kubelet[2955]: E0715 05:16:03.403968 2955 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-211\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:16:04.950525 kubelet[2955]: I0715 05:16:04.950497 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:05.350794 systemd[1]: Reload requested from client PID 3233 ('systemctl') (unit session-9.scope)... Jul 15 05:16:05.350812 systemd[1]: Reloading... Jul 15 05:16:05.476695 zram_generator::config[3277]: No configuration found. Jul 15 05:16:05.596049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:16:05.751037 systemd[1]: Reloading finished in 399 ms. 
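Before the kubelet comes back up below, note the cadence of the "Failed to ensure lease exists, will retry" errors above: the retry interval doubles from 200ms to 400ms, 800ms and then 1.6s while the API server stays unreachable, i.e. a plain exponential backoff. A short sketch of that progression (the 7s cap is an assumption, not something visible in this log):

    def lease_retry_intervals(start_s: float = 0.2, factor: float = 2.0,
                              cap_s: float = 7.0, steps: int = 6):
        """Yield retry intervals that double each step until they reach the (assumed) cap."""
        interval = start_s
        for _ in range(steps):
            yield interval
            interval = min(interval * factor, cap_s)

    print([f"{i:g}s" for i in lease_retry_intervals()])
    # -> ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']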
Jul 15 05:16:05.780207 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:16:05.788178 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:16:05.788477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:16:05.788582 systemd[1]: kubelet.service: Consumed 826ms CPU time, 129.2M memory peak. Jul 15 05:16:05.790470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:16:06.073217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:16:06.083169 (kubelet)[3337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:16:06.149517 kubelet[3337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:16:06.149517 kubelet[3337]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:16:06.149517 kubelet[3337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:16:06.150021 kubelet[3337]: I0715 05:16:06.149623 3337 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:16:06.159138 kubelet[3337]: I0715 05:16:06.159107 3337 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 05:16:06.159694 kubelet[3337]: I0715 05:16:06.159290 3337 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:16:06.159694 kubelet[3337]: I0715 05:16:06.159529 3337 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 05:16:06.161019 kubelet[3337]: I0715 05:16:06.161002 3337 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 05:16:06.163448 kubelet[3337]: I0715 05:16:06.163409 3337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:16:06.167962 kubelet[3337]: I0715 05:16:06.167944 3337 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:16:06.172981 kubelet[3337]: I0715 05:16:06.172956 3337 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:16:06.173362 kubelet[3337]: I0715 05:16:06.173333 3337 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:16:06.173788 kubelet[3337]: I0715 05:16:06.173441 3337 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-211","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:16:06.174240 kubelet[3337]: I0715 05:16:06.174203 3337 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:16:06.174319 kubelet[3337]: I0715 05:16:06.174311 3337 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 05:16:06.174424 kubelet[3337]: I0715 05:16:06.174416 3337 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:16:06.174612 kubelet[3337]: I0715 05:16:06.174603 3337 kubelet.go:480] "Attempting to sync node with API server" Jul 15 05:16:06.174823 kubelet[3337]: I0715 05:16:06.174812 3337 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:16:06.175503 kubelet[3337]: I0715 05:16:06.175483 3337 kubelet.go:386] "Adding apiserver pod source" Jul 15 05:16:06.175580 kubelet[3337]: I0715 05:16:06.175513 3337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:16:06.178325 kubelet[3337]: I0715 05:16:06.178305 3337 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:16:06.179121 kubelet[3337]: I0715 05:16:06.179102 3337 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 05:16:06.183728 kubelet[3337]: I0715 05:16:06.183625 3337 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:16:06.183906 kubelet[3337]: I0715 05:16:06.183892 3337 server.go:1289] "Started kubelet" Jul 15 05:16:06.186147 kubelet[3337]: I0715 05:16:06.186132 3337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:16:06.198872 kubelet[3337]: I0715 
05:16:06.198769 3337 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:16:06.200625 kubelet[3337]: I0715 05:16:06.200590 3337 server.go:317] "Adding debug handlers to kubelet server" Jul 15 05:16:06.204603 kubelet[3337]: I0715 05:16:06.204542 3337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:16:06.205403 kubelet[3337]: I0715 05:16:06.205377 3337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:16:06.206676 kubelet[3337]: I0715 05:16:06.206602 3337 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:16:06.207749 kubelet[3337]: I0715 05:16:06.206816 3337 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:16:06.207749 kubelet[3337]: E0715 05:16:06.207058 3337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-211\" not found" Jul 15 05:16:06.214522 kubelet[3337]: I0715 05:16:06.214446 3337 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:16:06.214675 kubelet[3337]: I0715 05:16:06.214590 3337 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:16:06.217519 kubelet[3337]: I0715 05:16:06.217408 3337 factory.go:223] Registration of the systemd container factory successfully Jul 15 05:16:06.218241 kubelet[3337]: I0715 05:16:06.217530 3337 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:16:06.221041 kubelet[3337]: I0715 05:16:06.220909 3337 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 05:16:06.225368 kubelet[3337]: I0715 05:16:06.225082 3337 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 05:16:06.225368 kubelet[3337]: I0715 05:16:06.225109 3337 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 05:16:06.225368 kubelet[3337]: I0715 05:16:06.225146 3337 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 05:16:06.225368 kubelet[3337]: I0715 05:16:06.225155 3337 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 05:16:06.225368 kubelet[3337]: E0715 05:16:06.225220 3337 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:16:06.229691 kubelet[3337]: E0715 05:16:06.227895 3337 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:16:06.230796 kubelet[3337]: I0715 05:16:06.230767 3337 factory.go:223] Registration of the containerd container factory successfully Jul 15 05:16:06.282904 kubelet[3337]: I0715 05:16:06.282869 3337 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:16:06.282904 kubelet[3337]: I0715 05:16:06.282886 3337 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:16:06.283104 kubelet[3337]: I0715 05:16:06.282988 3337 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:16:06.283236 kubelet[3337]: I0715 05:16:06.283208 3337 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:16:06.283294 kubelet[3337]: I0715 05:16:06.283229 3337 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:16:06.283294 kubelet[3337]: I0715 05:16:06.283252 3337 policy_none.go:49] "None policy: Start" Jul 15 05:16:06.283294 kubelet[3337]: I0715 05:16:06.283266 3337 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:16:06.283294 kubelet[3337]: I0715 05:16:06.283279 3337 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:16:06.283449 kubelet[3337]: I0715 05:16:06.283402 3337 state_mem.go:75] "Updated machine memory state" Jul 15 05:16:06.287700 kubelet[3337]: E0715 05:16:06.287625 3337 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 05:16:06.287852 kubelet[3337]: I0715 05:16:06.287830 3337 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:16:06.287910 kubelet[3337]: I0715 05:16:06.287848 3337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:16:06.288762 kubelet[3337]: I0715 05:16:06.288402 3337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:16:06.290442 kubelet[3337]: E0715 05:16:06.290418 3337 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 05:16:06.326767 kubelet[3337]: I0715 05:16:06.326631 3337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:06.326767 kubelet[3337]: I0715 05:16:06.326748 3337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-211" Jul 15 05:16:06.329863 kubelet[3337]: I0715 05:16:06.329820 3337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:16:06.338370 kubelet[3337]: E0715 05:16:06.338293 3337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-211\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:06.390865 kubelet[3337]: I0715 05:16:06.390340 3337 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-211" Jul 15 05:16:06.399549 kubelet[3337]: I0715 05:16:06.399516 3337 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-211" Jul 15 05:16:06.399703 kubelet[3337]: I0715 05:16:06.399592 3337 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-211" Jul 15 05:16:06.517807 kubelet[3337]: I0715 05:16:06.517769 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:06.518127 kubelet[3337]: I0715 05:16:06.518014 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:06.518127 kubelet[3337]: I0715 05:16:06.518037 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:06.518127 kubelet[3337]: I0715 05:16:06.518080 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc3e88ea2eb50e27678f27b91f6d80b0-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-211\" (UID: \"dc3e88ea2eb50e27678f27b91f6d80b0\") " pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:16:06.518127 kubelet[3337]: I0715 05:16:06.518096 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:06.518387 kubelet[3337]: I0715 05:16:06.518114 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e52b91efcf81ccbc2c3266136c54f575-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-211\" (UID: \"e52b91efcf81ccbc2c3266136c54f575\") " pod="kube-system/kube-scheduler-ip-172-31-21-211" Jul 15 05:16:06.518387 kubelet[3337]: I0715 05:16:06.518312 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc3e88ea2eb50e27678f27b91f6d80b0-ca-certs\") pod \"kube-apiserver-ip-172-31-21-211\" (UID: \"dc3e88ea2eb50e27678f27b91f6d80b0\") " pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:16:06.518387 kubelet[3337]: I0715 05:16:06.518328 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc3e88ea2eb50e27678f27b91f6d80b0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-211\" (UID: \"dc3e88ea2eb50e27678f27b91f6d80b0\") " pod="kube-system/kube-apiserver-ip-172-31-21-211" Jul 15 05:16:06.518387 kubelet[3337]: I0715 05:16:06.518362 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/466a894bad73b6b41249d09f33d89252-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-211\" (UID: \"466a894bad73b6b41249d09f33d89252\") " pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:07.184167 kubelet[3337]: I0715 05:16:07.184129 3337 apiserver.go:52] "Watching apiserver" Jul 15 05:16:07.215338 kubelet[3337]: I0715 05:16:07.215294 3337 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:16:07.266580 kubelet[3337]: I0715 05:16:07.265986 3337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:07.281219 kubelet[3337]: E0715 05:16:07.280251 3337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-211\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-211" Jul 15 05:16:07.330093 kubelet[3337]: I0715 05:16:07.329996 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-211" podStartSLOduration=3.329973983 podStartE2EDuration="3.329973983s" podCreationTimestamp="2025-07-15 05:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:07.310191196 +0000 UTC m=+1.218838043" watchObservedRunningTime="2025-07-15 05:16:07.329973983 +0000 UTC m=+1.238620830" Jul 15 05:16:07.350546 kubelet[3337]: I0715 05:16:07.350484 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-211" podStartSLOduration=1.350462537 podStartE2EDuration="1.350462537s" podCreationTimestamp="2025-07-15 05:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:07.330887384 +0000 UTC m=+1.239534232" watchObservedRunningTime="2025-07-15 05:16:07.350462537 +0000 UTC m=+1.259109382" Jul 15 05:16:07.371415 kubelet[3337]: I0715 05:16:07.371338 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-211" podStartSLOduration=1.371317398 podStartE2EDuration="1.371317398s" podCreationTimestamp="2025-07-15 
05:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:07.351866072 +0000 UTC m=+1.260512918" watchObservedRunningTime="2025-07-15 05:16:07.371317398 +0000 UTC m=+1.279964245" Jul 15 05:16:07.590521 update_engine[1984]: I20250715 05:16:07.589715 1984 update_attempter.cc:509] Updating boot flags... Jul 15 05:16:12.116699 kubelet[3337]: I0715 05:16:12.116593 3337 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:16:12.117170 containerd[2010]: time="2025-07-15T05:16:12.117030467Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 05:16:12.117503 kubelet[3337]: I0715 05:16:12.117313 3337 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:16:12.835519 systemd[1]: Created slice kubepods-besteffort-pod997a8869_9e72_4b58_b9e1_1dd82bd2d121.slice - libcontainer container kubepods-besteffort-pod997a8869_9e72_4b58_b9e1_1dd82bd2d121.slice. Jul 15 05:16:12.869823 kubelet[3337]: I0715 05:16:12.869783 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/997a8869-9e72-4b58-b9e1-1dd82bd2d121-lib-modules\") pod \"kube-proxy-6nr9v\" (UID: \"997a8869-9e72-4b58-b9e1-1dd82bd2d121\") " pod="kube-system/kube-proxy-6nr9v" Jul 15 05:16:12.869993 kubelet[3337]: I0715 05:16:12.869866 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmgb9\" (UniqueName: \"kubernetes.io/projected/997a8869-9e72-4b58-b9e1-1dd82bd2d121-kube-api-access-vmgb9\") pod \"kube-proxy-6nr9v\" (UID: \"997a8869-9e72-4b58-b9e1-1dd82bd2d121\") " pod="kube-system/kube-proxy-6nr9v" Jul 15 05:16:12.869993 kubelet[3337]: I0715 05:16:12.869889 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/997a8869-9e72-4b58-b9e1-1dd82bd2d121-kube-proxy\") pod \"kube-proxy-6nr9v\" (UID: \"997a8869-9e72-4b58-b9e1-1dd82bd2d121\") " pod="kube-system/kube-proxy-6nr9v" Jul 15 05:16:12.869993 kubelet[3337]: I0715 05:16:12.869942 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/997a8869-9e72-4b58-b9e1-1dd82bd2d121-xtables-lock\") pod \"kube-proxy-6nr9v\" (UID: \"997a8869-9e72-4b58-b9e1-1dd82bd2d121\") " pod="kube-system/kube-proxy-6nr9v" Jul 15 05:16:13.143634 containerd[2010]: time="2025-07-15T05:16:13.143569292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nr9v,Uid:997a8869-9e72-4b58-b9e1-1dd82bd2d121,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:13.169155 containerd[2010]: time="2025-07-15T05:16:13.169117409Z" level=info msg="connecting to shim 2d5238ea2be1297cdd9f6c52fae3a71518a010d7aaa4640dae75392a2d9c0ebb" address="unix:///run/containerd/s/8ce41d282e2c1c04c2c20fb511ca4576c07d02091f30c9f6e7c1ba5deb50b076" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:13.195883 systemd[1]: Started cri-containerd-2d5238ea2be1297cdd9f6c52fae3a71518a010d7aaa4640dae75392a2d9c0ebb.scope - libcontainer container 2d5238ea2be1297cdd9f6c52fae3a71518a010d7aaa4640dae75392a2d9c0ebb. 
Jul 15 05:16:13.228640 containerd[2010]: time="2025-07-15T05:16:13.228328890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nr9v,Uid:997a8869-9e72-4b58-b9e1-1dd82bd2d121,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d5238ea2be1297cdd9f6c52fae3a71518a010d7aaa4640dae75392a2d9c0ebb\"" Jul 15 05:16:13.235151 containerd[2010]: time="2025-07-15T05:16:13.235106956Z" level=info msg="CreateContainer within sandbox \"2d5238ea2be1297cdd9f6c52fae3a71518a010d7aaa4640dae75392a2d9c0ebb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 05:16:13.252685 containerd[2010]: time="2025-07-15T05:16:13.251035409Z" level=info msg="Container 88bd99d2d3b6702b559159a613336f9e0f9edd51bcfce2ab2e0469fadc5ad616: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:13.258385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437273897.mount: Deactivated successfully. Jul 15 05:16:13.276596 containerd[2010]: time="2025-07-15T05:16:13.276497587Z" level=info msg="CreateContainer within sandbox \"2d5238ea2be1297cdd9f6c52fae3a71518a010d7aaa4640dae75392a2d9c0ebb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"88bd99d2d3b6702b559159a613336f9e0f9edd51bcfce2ab2e0469fadc5ad616\"" Jul 15 05:16:13.279100 containerd[2010]: time="2025-07-15T05:16:13.279064140Z" level=info msg="StartContainer for \"88bd99d2d3b6702b559159a613336f9e0f9edd51bcfce2ab2e0469fadc5ad616\"" Jul 15 05:16:13.284546 containerd[2010]: time="2025-07-15T05:16:13.284500598Z" level=info msg="connecting to shim 88bd99d2d3b6702b559159a613336f9e0f9edd51bcfce2ab2e0469fadc5ad616" address="unix:///run/containerd/s/8ce41d282e2c1c04c2c20fb511ca4576c07d02091f30c9f6e7c1ba5deb50b076" protocol=ttrpc version=3 Jul 15 05:16:13.314015 systemd[1]: Started cri-containerd-88bd99d2d3b6702b559159a613336f9e0f9edd51bcfce2ab2e0469fadc5ad616.scope - libcontainer container 88bd99d2d3b6702b559159a613336f9e0f9edd51bcfce2ab2e0469fadc5ad616. Jul 15 05:16:13.326077 systemd[1]: Created slice kubepods-besteffort-pod38d9fd65_5609_475b_8476_48715487440f.slice - libcontainer container kubepods-besteffort-pod38d9fd65_5609_475b_8476_48715487440f.slice. 
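The kubelet entries at 05:16:12 above show the pod CIDR being pushed to the runtime (newPodCIDR="192.168.0.0/24") just before the kube-proxy-6nr9v sandbox and container are created. As a minimal sketch, the same state can be read back from the API server with client-go; the node name and the kube-system/kube-proxy-6nr9v pod are taken from the log, while the kubeconfig path and the k8s-app=kube-proxy label selector are conventional assumptions.

    // Hedged sketch: read back the node's pod CIDR and the kube-proxy pod
    // whose sandbox and container are started in the log above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ip-172-31-21-211", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("podCIDR:", node.Spec.PodCIDR) // the log reports 192.168.0.0/24

        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
            LabelSelector: "k8s-app=kube-proxy", // assumed: the conventional kube-proxy label
            FieldSelector: "spec.nodeName=ip-172-31-21-211",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s phase=%s\n", p.Name, p.Status.Phase)
        }
    }
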
Jul 15 05:16:13.372928 kubelet[3337]: I0715 05:16:13.372803 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dmhh\" (UniqueName: \"kubernetes.io/projected/38d9fd65-5609-475b-8476-48715487440f-kube-api-access-6dmhh\") pod \"tigera-operator-747864d56d-jkr47\" (UID: \"38d9fd65-5609-475b-8476-48715487440f\") " pod="tigera-operator/tigera-operator-747864d56d-jkr47" Jul 15 05:16:13.372928 kubelet[3337]: I0715 05:16:13.372857 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/38d9fd65-5609-475b-8476-48715487440f-var-lib-calico\") pod \"tigera-operator-747864d56d-jkr47\" (UID: \"38d9fd65-5609-475b-8476-48715487440f\") " pod="tigera-operator/tigera-operator-747864d56d-jkr47" Jul 15 05:16:13.387036 containerd[2010]: time="2025-07-15T05:16:13.387004984Z" level=info msg="StartContainer for \"88bd99d2d3b6702b559159a613336f9e0f9edd51bcfce2ab2e0469fadc5ad616\" returns successfully" Jul 15 05:16:13.633867 containerd[2010]: time="2025-07-15T05:16:13.633827086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jkr47,Uid:38d9fd65-5609-475b-8476-48715487440f,Namespace:tigera-operator,Attempt:0,}" Jul 15 05:16:13.663282 containerd[2010]: time="2025-07-15T05:16:13.663231514Z" level=info msg="connecting to shim 367fc8b72133424f8d46fa933f246762409f2eba0ba86761faffd8b5817c87ef" address="unix:///run/containerd/s/e49023da0dbace2487041b0c9cdc823ba2ab4eb46b6a3ab8420834dc25a11b25" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:13.686853 systemd[1]: Started cri-containerd-367fc8b72133424f8d46fa933f246762409f2eba0ba86761faffd8b5817c87ef.scope - libcontainer container 367fc8b72133424f8d46fa933f246762409f2eba0ba86761faffd8b5817c87ef. Jul 15 05:16:13.736093 containerd[2010]: time="2025-07-15T05:16:13.736061701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jkr47,Uid:38d9fd65-5609-475b-8476-48715487440f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"367fc8b72133424f8d46fa933f246762409f2eba0ba86761faffd8b5817c87ef\"" Jul 15 05:16:13.738037 containerd[2010]: time="2025-07-15T05:16:13.737979010Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 05:16:14.312099 kubelet[3337]: I0715 05:16:14.312046 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6nr9v" podStartSLOduration=2.312029376 podStartE2EDuration="2.312029376s" podCreationTimestamp="2025-07-15 05:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:14.311649782 +0000 UTC m=+8.220296622" watchObservedRunningTime="2025-07-15 05:16:14.312029376 +0000 UTC m=+8.220676223" Jul 15 05:16:14.986208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763763290.mount: Deactivated successfully. 
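The pod_startup_latency_tracker entry above reports podStartE2EDuration="2.312029376s" for kube-proxy-6nr9v, which lines up exactly with watchObservedRunningTime minus podCreationTimestamp from the same line. A stdlib-only Go sketch of that arithmetic, with both timestamps copied from the log (monotonic "m=+..." suffixes dropped so they parse):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching how the kubelet prints time.Time values in the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, err := time.Parse(layout, "2025-07-15 05:16:12 +0000 UTC")
        if err != nil {
            panic(err)
        }
        watchObserved, err := time.Parse(layout, "2025-07-15 05:16:14.312029376 +0000 UTC")
        if err != nil {
            panic(err)
        }

        // Prints 2.312029376s, matching the logged podStartE2EDuration.
        fmt.Println(watchObserved.Sub(created))
    }
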
Jul 15 05:16:15.687311 containerd[2010]: time="2025-07-15T05:16:15.687246310Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:15.688459 containerd[2010]: time="2025-07-15T05:16:15.688409339Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 15 05:16:15.689726 containerd[2010]: time="2025-07-15T05:16:15.689678797Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:15.692230 containerd[2010]: time="2025-07-15T05:16:15.692164199Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:15.693044 containerd[2010]: time="2025-07-15T05:16:15.692904591Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.954730188s" Jul 15 05:16:15.693044 containerd[2010]: time="2025-07-15T05:16:15.692940934Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 15 05:16:15.697641 containerd[2010]: time="2025-07-15T05:16:15.697607735Z" level=info msg="CreateContainer within sandbox \"367fc8b72133424f8d46fa933f246762409f2eba0ba86761faffd8b5817c87ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 05:16:15.708684 containerd[2010]: time="2025-07-15T05:16:15.707136059Z" level=info msg="Container d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:15.717782 containerd[2010]: time="2025-07-15T05:16:15.717641304Z" level=info msg="CreateContainer within sandbox \"367fc8b72133424f8d46fa933f246762409f2eba0ba86761faffd8b5817c87ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18\"" Jul 15 05:16:15.718682 containerd[2010]: time="2025-07-15T05:16:15.718272875Z" level=info msg="StartContainer for \"d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18\"" Jul 15 05:16:15.719331 containerd[2010]: time="2025-07-15T05:16:15.719308107Z" level=info msg="connecting to shim d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18" address="unix:///run/containerd/s/e49023da0dbace2487041b0c9cdc823ba2ab4eb46b6a3ab8420834dc25a11b25" protocol=ttrpc version=3 Jul 15 05:16:15.742863 systemd[1]: Started cri-containerd-d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18.scope - libcontainer container d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18. 
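After the pull above completes (image id sha256:8bde1647..., 25052538 bytes), the operator image should eventually also show up in the node's reported image list. A hedged client-go sketch that looks for it follows; the kubeconfig path is an assumption, and whether a particular image appears in node status depends on the kubelet's image-reporting limit.

    // Hedged sketch: look for the pulled tigera/operator image in the
    // node's .status.images list.
    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ip-172-31-21-211", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, img := range node.Status.Images {
            for _, name := range img.Names {
                if strings.Contains(name, "tigera/operator") {
                    fmt.Printf("%s  %d bytes\n", name, img.SizeBytes)
                }
            }
        }
    }
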
Jul 15 05:16:15.772548 containerd[2010]: time="2025-07-15T05:16:15.772500205Z" level=info msg="StartContainer for \"d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18\" returns successfully" Jul 15 05:16:17.614861 kubelet[3337]: I0715 05:16:17.614738 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-jkr47" podStartSLOduration=2.658388831 podStartE2EDuration="4.61472193s" podCreationTimestamp="2025-07-15 05:16:13 +0000 UTC" firstStartedPulling="2025-07-15 05:16:13.737566511 +0000 UTC m=+7.646213335" lastFinishedPulling="2025-07-15 05:16:15.693899593 +0000 UTC m=+9.602546434" observedRunningTime="2025-07-15 05:16:16.325123098 +0000 UTC m=+10.233769938" watchObservedRunningTime="2025-07-15 05:16:17.61472193 +0000 UTC m=+11.523368776" Jul 15 05:16:22.520478 sudo[2392]: pam_unix(sudo:session): session closed for user root Jul 15 05:16:22.546628 sshd[2391]: Connection closed by 139.178.89.65 port 56280 Jul 15 05:16:22.546500 sshd-session[2388]: pam_unix(sshd:session): session closed for user core Jul 15 05:16:22.556953 systemd[1]: sshd@8-172.31.21.211:22-139.178.89.65:56280.service: Deactivated successfully. Jul 15 05:16:22.561508 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 05:16:22.565142 systemd[1]: session-9.scope: Consumed 6.772s CPU time, 154.4M memory peak. Jul 15 05:16:22.569073 systemd-logind[1983]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:16:22.574276 systemd-logind[1983]: Removed session 9. Jul 15 05:16:27.164773 systemd[1]: Created slice kubepods-besteffort-pod5a32df71_c069_4e81_a160_d190d6886db2.slice - libcontainer container kubepods-besteffort-pod5a32df71_c069_4e81_a160_d190d6886db2.slice. Jul 15 05:16:27.262351 kubelet[3337]: I0715 05:16:27.262283 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a32df71-c069-4e81-a160-d190d6886db2-tigera-ca-bundle\") pod \"calico-typha-59794c788d-m4txk\" (UID: \"5a32df71-c069-4e81-a160-d190d6886db2\") " pod="calico-system/calico-typha-59794c788d-m4txk" Jul 15 05:16:27.262351 kubelet[3337]: I0715 05:16:27.262327 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5a32df71-c069-4e81-a160-d190d6886db2-typha-certs\") pod \"calico-typha-59794c788d-m4txk\" (UID: \"5a32df71-c069-4e81-a160-d190d6886db2\") " pod="calico-system/calico-typha-59794c788d-m4txk" Jul 15 05:16:27.262795 kubelet[3337]: I0715 05:16:27.262407 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vq67\" (UniqueName: \"kubernetes.io/projected/5a32df71-c069-4e81-a160-d190d6886db2-kube-api-access-8vq67\") pod \"calico-typha-59794c788d-m4txk\" (UID: \"5a32df71-c069-4e81-a160-d190d6886db2\") " pod="calico-system/calico-typha-59794c788d-m4txk" Jul 15 05:16:27.477981 containerd[2010]: time="2025-07-15T05:16:27.477508822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59794c788d-m4txk,Uid:5a32df71-c069-4e81-a160-d190d6886db2,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:27.535697 containerd[2010]: time="2025-07-15T05:16:27.535556643Z" level=info msg="connecting to shim 4765121e5905572965ae7f7385e1bd55a0ae5ea646e858361670f44430ea21e0" address="unix:///run/containerd/s/550c22d3018efa3fdb593bb6f847f4779f57460b869da3ae8fa940682b86119b" namespace=k8s.io protocol=ttrpc 
version=3 Jul 15 05:16:27.583939 systemd[1]: Started cri-containerd-4765121e5905572965ae7f7385e1bd55a0ae5ea646e858361670f44430ea21e0.scope - libcontainer container 4765121e5905572965ae7f7385e1bd55a0ae5ea646e858361670f44430ea21e0. Jul 15 05:16:27.617773 systemd[1]: Created slice kubepods-besteffort-pod3ad4bd63_8d85_4e15_a3ff_eafdf2b3e220.slice - libcontainer container kubepods-besteffort-pod3ad4bd63_8d85_4e15_a3ff_eafdf2b3e220.slice. Jul 15 05:16:27.667694 kubelet[3337]: I0715 05:16:27.664568 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-tigera-ca-bundle\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.667694 kubelet[3337]: I0715 05:16:27.664637 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-var-lib-calico\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.667694 kubelet[3337]: I0715 05:16:27.664688 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-var-run-calico\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.667694 kubelet[3337]: I0715 05:16:27.664713 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-cni-net-dir\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.667694 kubelet[3337]: I0715 05:16:27.665416 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf28h\" (UniqueName: \"kubernetes.io/projected/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-kube-api-access-sf28h\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.668010 kubelet[3337]: I0715 05:16:27.665520 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-cni-bin-dir\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.668010 kubelet[3337]: I0715 05:16:27.665562 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-cni-log-dir\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.668010 kubelet[3337]: I0715 05:16:27.665589 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-lib-modules\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.668010 
kubelet[3337]: I0715 05:16:27.665615 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-xtables-lock\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.668010 kubelet[3337]: I0715 05:16:27.665653 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-node-certs\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.668208 kubelet[3337]: I0715 05:16:27.665699 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-policysync\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.668208 kubelet[3337]: I0715 05:16:27.665723 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220-flexvol-driver-host\") pod \"calico-node-l7qrw\" (UID: \"3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220\") " pod="calico-system/calico-node-l7qrw" Jul 15 05:16:27.697081 containerd[2010]: time="2025-07-15T05:16:27.697035466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59794c788d-m4txk,Uid:5a32df71-c069-4e81-a160-d190d6886db2,Namespace:calico-system,Attempt:0,} returns sandbox id \"4765121e5905572965ae7f7385e1bd55a0ae5ea646e858361670f44430ea21e0\"" Jul 15 05:16:27.700291 containerd[2010]: time="2025-07-15T05:16:27.700241051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 05:16:27.774057 kubelet[3337]: E0715 05:16:27.773888 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.774057 kubelet[3337]: W0715 05:16:27.773916 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.774057 kubelet[3337]: E0715 05:16:27.773941 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.779396 kubelet[3337]: E0715 05:16:27.779309 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.779396 kubelet[3337]: W0715 05:16:27.779335 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.779396 kubelet[3337]: E0715 05:16:27.779357 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.806981 kubelet[3337]: E0715 05:16:27.806890 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.806981 kubelet[3337]: W0715 05:16:27.806915 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.806981 kubelet[3337]: E0715 05:16:27.806937 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.927998 containerd[2010]: time="2025-07-15T05:16:27.927533902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l7qrw,Uid:3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:27.939851 kubelet[3337]: E0715 05:16:27.939068 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfc8q" podUID="d668f1c9-e444-45ed-aa1c-55630e5a2640" Jul 15 05:16:27.954736 kubelet[3337]: E0715 05:16:27.954696 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.955454 kubelet[3337]: W0715 05:16:27.955262 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.955454 kubelet[3337]: E0715 05:16:27.955304 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.956312 kubelet[3337]: E0715 05:16:27.956255 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.956312 kubelet[3337]: W0715 05:16:27.956272 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.956312 kubelet[3337]: E0715 05:16:27.956290 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.956946 kubelet[3337]: E0715 05:16:27.956871 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.956946 kubelet[3337]: W0715 05:16:27.956886 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.956946 kubelet[3337]: E0715 05:16:27.956901 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.958916 kubelet[3337]: E0715 05:16:27.958896 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.959322 kubelet[3337]: W0715 05:16:27.959141 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.959322 kubelet[3337]: E0715 05:16:27.959163 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.960935 kubelet[3337]: E0715 05:16:27.960685 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.960935 kubelet[3337]: W0715 05:16:27.960700 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.960935 kubelet[3337]: E0715 05:16:27.960714 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.963146 kubelet[3337]: E0715 05:16:27.962803 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.963694 kubelet[3337]: W0715 05:16:27.963046 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.963694 kubelet[3337]: E0715 05:16:27.963333 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.967343 kubelet[3337]: E0715 05:16:27.966374 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.967343 kubelet[3337]: W0715 05:16:27.966391 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.967343 kubelet[3337]: E0715 05:16:27.966408 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.967343 kubelet[3337]: E0715 05:16:27.967046 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.967343 kubelet[3337]: W0715 05:16:27.967058 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.967343 kubelet[3337]: E0715 05:16:27.967072 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.969811 kubelet[3337]: E0715 05:16:27.969793 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.969919 kubelet[3337]: W0715 05:16:27.969906 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.969998 kubelet[3337]: E0715 05:16:27.969985 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.970366 kubelet[3337]: E0715 05:16:27.970326 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.970512 kubelet[3337]: W0715 05:16:27.970479 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.970609 kubelet[3337]: E0715 05:16:27.970597 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.970955 kubelet[3337]: E0715 05:16:27.970942 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.971274 kubelet[3337]: W0715 05:16:27.970989 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.971274 kubelet[3337]: E0715 05:16:27.971004 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.971777 kubelet[3337]: E0715 05:16:27.971743 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.971960 kubelet[3337]: W0715 05:16:27.971757 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.972113 kubelet[3337]: E0715 05:16:27.972024 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.972900 kubelet[3337]: E0715 05:16:27.972874 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.973145 kubelet[3337]: W0715 05:16:27.973074 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.973145 kubelet[3337]: E0715 05:16:27.973096 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.973951 kubelet[3337]: E0715 05:16:27.973932 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.974130 kubelet[3337]: W0715 05:16:27.974006 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.974130 kubelet[3337]: E0715 05:16:27.974022 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.975862 kubelet[3337]: E0715 05:16:27.975739 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.975862 kubelet[3337]: W0715 05:16:27.975755 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.975862 kubelet[3337]: E0715 05:16:27.975770 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.976679 kubelet[3337]: E0715 05:16:27.976370 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.976876 kubelet[3337]: W0715 05:16:27.976763 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.976876 kubelet[3337]: E0715 05:16:27.976784 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.980584 kubelet[3337]: E0715 05:16:27.979877 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.980584 kubelet[3337]: W0715 05:16:27.979891 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.980584 kubelet[3337]: E0715 05:16:27.979905 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.980584 kubelet[3337]: E0715 05:16:27.980108 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.980584 kubelet[3337]: W0715 05:16:27.980117 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.980584 kubelet[3337]: E0715 05:16:27.980128 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.980584 kubelet[3337]: E0715 05:16:27.980324 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.980584 kubelet[3337]: W0715 05:16:27.980334 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.980584 kubelet[3337]: E0715 05:16:27.980345 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.982100 kubelet[3337]: E0715 05:16:27.981733 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.982100 kubelet[3337]: W0715 05:16:27.981760 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.982100 kubelet[3337]: E0715 05:16:27.981774 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.985806 kubelet[3337]: E0715 05:16:27.985453 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.985806 kubelet[3337]: W0715 05:16:27.985476 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.985806 kubelet[3337]: E0715 05:16:27.985493 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.985806 kubelet[3337]: I0715 05:16:27.985532 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d668f1c9-e444-45ed-aa1c-55630e5a2640-kubelet-dir\") pod \"csi-node-driver-gfc8q\" (UID: \"d668f1c9-e444-45ed-aa1c-55630e5a2640\") " pod="calico-system/csi-node-driver-gfc8q" Jul 15 05:16:27.987769 kubelet[3337]: E0715 05:16:27.986376 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.987769 kubelet[3337]: W0715 05:16:27.986395 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.987769 kubelet[3337]: E0715 05:16:27.986410 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.987769 kubelet[3337]: I0715 05:16:27.986436 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d668f1c9-e444-45ed-aa1c-55630e5a2640-registration-dir\") pod \"csi-node-driver-gfc8q\" (UID: \"d668f1c9-e444-45ed-aa1c-55630e5a2640\") " pod="calico-system/csi-node-driver-gfc8q" Jul 15 05:16:27.988486 kubelet[3337]: E0715 05:16:27.988224 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.988486 kubelet[3337]: W0715 05:16:27.988241 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.988486 kubelet[3337]: E0715 05:16:27.988265 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.988486 kubelet[3337]: I0715 05:16:27.988292 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d668f1c9-e444-45ed-aa1c-55630e5a2640-varrun\") pod \"csi-node-driver-gfc8q\" (UID: \"d668f1c9-e444-45ed-aa1c-55630e5a2640\") " pod="calico-system/csi-node-driver-gfc8q" Jul 15 05:16:27.990149 kubelet[3337]: E0715 05:16:27.989415 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.990149 kubelet[3337]: W0715 05:16:27.989431 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.990149 kubelet[3337]: E0715 05:16:27.989445 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.990149 kubelet[3337]: I0715 05:16:27.989470 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d668f1c9-e444-45ed-aa1c-55630e5a2640-socket-dir\") pod \"csi-node-driver-gfc8q\" (UID: \"d668f1c9-e444-45ed-aa1c-55630e5a2640\") " pod="calico-system/csi-node-driver-gfc8q" Jul 15 05:16:27.992455 kubelet[3337]: E0715 05:16:27.992134 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.992455 kubelet[3337]: W0715 05:16:27.992150 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.992455 kubelet[3337]: E0715 05:16:27.992163 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.992455 kubelet[3337]: I0715 05:16:27.992193 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kng8z\" (UniqueName: \"kubernetes.io/projected/d668f1c9-e444-45ed-aa1c-55630e5a2640-kube-api-access-kng8z\") pod \"csi-node-driver-gfc8q\" (UID: \"d668f1c9-e444-45ed-aa1c-55630e5a2640\") " pod="calico-system/csi-node-driver-gfc8q" Jul 15 05:16:27.993648 kubelet[3337]: E0715 05:16:27.993627 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.993648 kubelet[3337]: W0715 05:16:27.993647 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.994243 kubelet[3337]: E0715 05:16:27.993696 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.995103 kubelet[3337]: E0715 05:16:27.994832 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.995103 kubelet[3337]: W0715 05:16:27.994846 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.995103 kubelet[3337]: E0715 05:16:27.994860 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.995838 kubelet[3337]: E0715 05:16:27.995741 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.996193 kubelet[3337]: W0715 05:16:27.996006 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.996193 kubelet[3337]: E0715 05:16:27.996026 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:27.996949 kubelet[3337]: E0715 05:16:27.996925 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.997391 kubelet[3337]: W0715 05:16:27.997107 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.997391 kubelet[3337]: E0715 05:16:27.997125 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:27.998599 kubelet[3337]: E0715 05:16:27.998511 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:27.998599 kubelet[3337]: W0715 05:16:27.998531 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:27.998599 kubelet[3337]: E0715 05:16:27.998545 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.001131 kubelet[3337]: E0715 05:16:27.999491 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.001131 kubelet[3337]: W0715 05:16:28.000638 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.001131 kubelet[3337]: E0715 05:16:28.000674 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.001995 kubelet[3337]: E0715 05:16:28.001867 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.001995 kubelet[3337]: W0715 05:16:28.001882 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.001995 kubelet[3337]: E0715 05:16:28.001897 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.003096 kubelet[3337]: E0715 05:16:28.003082 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.003314 kubelet[3337]: W0715 05:16:28.003299 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.003581 kubelet[3337]: E0715 05:16:28.003486 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.005009 kubelet[3337]: E0715 05:16:28.004867 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.005009 kubelet[3337]: W0715 05:16:28.004966 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.005009 kubelet[3337]: E0715 05:16:28.004982 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.006796 kubelet[3337]: E0715 05:16:28.006520 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.006796 kubelet[3337]: W0715 05:16:28.006537 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.006796 kubelet[3337]: E0715 05:16:28.006551 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.008559 containerd[2010]: time="2025-07-15T05:16:28.008225995Z" level=info msg="connecting to shim 112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00" address="unix:///run/containerd/s/38b5c604f099516f6e0d05795d33948a32cf6e87671db1bcc2246ade723cc84d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:28.053042 systemd[1]: Started cri-containerd-112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00.scope - libcontainer container 112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00. Jul 15 05:16:28.094064 kubelet[3337]: E0715 05:16:28.094003 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.094064 kubelet[3337]: W0715 05:16:28.094027 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.094064 kubelet[3337]: E0715 05:16:28.094052 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.095202 kubelet[3337]: E0715 05:16:28.095174 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.095202 kubelet[3337]: W0715 05:16:28.095198 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.095514 kubelet[3337]: E0715 05:16:28.095222 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.096299 kubelet[3337]: E0715 05:16:28.096233 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.096299 kubelet[3337]: W0715 05:16:28.096260 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.096299 kubelet[3337]: E0715 05:16:28.096279 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.097984 kubelet[3337]: E0715 05:16:28.097923 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.097984 kubelet[3337]: W0715 05:16:28.097944 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.097984 kubelet[3337]: E0715 05:16:28.097961 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.098824 kubelet[3337]: E0715 05:16:28.098199 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.098824 kubelet[3337]: W0715 05:16:28.098210 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.098824 kubelet[3337]: E0715 05:16:28.098222 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.099203 kubelet[3337]: E0715 05:16:28.098972 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.099203 kubelet[3337]: W0715 05:16:28.098984 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.099203 kubelet[3337]: E0715 05:16:28.098997 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.099203 kubelet[3337]: E0715 05:16:28.099204 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.099367 kubelet[3337]: W0715 05:16:28.099213 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.099367 kubelet[3337]: E0715 05:16:28.099225 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.100093 kubelet[3337]: E0715 05:16:28.100072 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.100093 kubelet[3337]: W0715 05:16:28.100092 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.100402 kubelet[3337]: E0715 05:16:28.100107 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.100706 kubelet[3337]: E0715 05:16:28.100684 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.100706 kubelet[3337]: W0715 05:16:28.100705 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.100831 kubelet[3337]: E0715 05:16:28.100719 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.101190 kubelet[3337]: E0715 05:16:28.101170 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.101368 kubelet[3337]: W0715 05:16:28.101213 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.101368 kubelet[3337]: E0715 05:16:28.101228 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.101733 kubelet[3337]: E0715 05:16:28.101656 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.102564 kubelet[3337]: W0715 05:16:28.101734 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.102564 kubelet[3337]: E0715 05:16:28.101749 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.102880 kubelet[3337]: E0715 05:16:28.102858 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.102880 kubelet[3337]: W0715 05:16:28.102874 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.102989 kubelet[3337]: E0715 05:16:28.102888 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.103205 kubelet[3337]: E0715 05:16:28.103189 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.103205 kubelet[3337]: W0715 05:16:28.103204 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.103322 kubelet[3337]: E0715 05:16:28.103217 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.103468 kubelet[3337]: E0715 05:16:28.103452 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.103468 kubelet[3337]: W0715 05:16:28.103467 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.103570 kubelet[3337]: E0715 05:16:28.103479 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.103902 kubelet[3337]: E0715 05:16:28.103710 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.103902 kubelet[3337]: W0715 05:16:28.103719 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.103902 kubelet[3337]: E0715 05:16:28.103731 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.104591 kubelet[3337]: E0715 05:16:28.103971 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.104591 kubelet[3337]: W0715 05:16:28.103981 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.104591 kubelet[3337]: E0715 05:16:28.103993 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.104591 kubelet[3337]: E0715 05:16:28.104286 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.104591 kubelet[3337]: W0715 05:16:28.104295 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.104591 kubelet[3337]: E0715 05:16:28.104307 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.104865 kubelet[3337]: E0715 05:16:28.104598 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.104865 kubelet[3337]: W0715 05:16:28.104610 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.104865 kubelet[3337]: E0715 05:16:28.104623 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.105686 kubelet[3337]: E0715 05:16:28.105105 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.105686 kubelet[3337]: W0715 05:16:28.105119 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.105686 kubelet[3337]: E0715 05:16:28.105144 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.105883 kubelet[3337]: E0715 05:16:28.105840 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.105883 kubelet[3337]: W0715 05:16:28.105853 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.105883 kubelet[3337]: E0715 05:16:28.105866 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.106077 kubelet[3337]: E0715 05:16:28.106060 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.106077 kubelet[3337]: W0715 05:16:28.106075 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.106221 kubelet[3337]: E0715 05:16:28.106086 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.107684 kubelet[3337]: E0715 05:16:28.106340 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.107684 kubelet[3337]: W0715 05:16:28.106351 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.107684 kubelet[3337]: E0715 05:16:28.106363 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.107841 kubelet[3337]: E0715 05:16:28.107738 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.107841 kubelet[3337]: W0715 05:16:28.107750 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.107841 kubelet[3337]: E0715 05:16:28.107763 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:28.108134 kubelet[3337]: E0715 05:16:28.108079 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.108134 kubelet[3337]: W0715 05:16:28.108093 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.108134 kubelet[3337]: E0715 05:16:28.108105 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.108819 kubelet[3337]: E0715 05:16:28.108762 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.108819 kubelet[3337]: W0715 05:16:28.108777 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.108819 kubelet[3337]: E0715 05:16:28.108790 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.117065 kubelet[3337]: E0715 05:16:28.117014 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:28.117065 kubelet[3337]: W0715 05:16:28.117037 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:28.117065 kubelet[3337]: E0715 05:16:28.117060 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:28.165237 containerd[2010]: time="2025-07-15T05:16:28.165185166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l7qrw,Uid:3ad4bd63-8d85-4e15-a3ff-eafdf2b3e220,Namespace:calico-system,Attempt:0,} returns sandbox id \"112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00\"" Jul 15 05:16:29.184770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107506771.mount: Deactivated successfully. 
Jul 15 05:16:30.231966 kubelet[3337]: E0715 05:16:30.231824 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfc8q" podUID="d668f1c9-e444-45ed-aa1c-55630e5a2640" Jul 15 05:16:30.506199 containerd[2010]: time="2025-07-15T05:16:30.505209899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:30.509469 containerd[2010]: time="2025-07-15T05:16:30.509430728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 05:16:30.510867 containerd[2010]: time="2025-07-15T05:16:30.510833751Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:30.519197 containerd[2010]: time="2025-07-15T05:16:30.519156412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:30.519944 containerd[2010]: time="2025-07-15T05:16:30.519747822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.819451194s" Jul 15 05:16:30.519944 containerd[2010]: time="2025-07-15T05:16:30.519790707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 05:16:30.521339 containerd[2010]: time="2025-07-15T05:16:30.521312108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 05:16:30.554318 containerd[2010]: time="2025-07-15T05:16:30.554277031Z" level=info msg="CreateContainer within sandbox \"4765121e5905572965ae7f7385e1bd55a0ae5ea646e858361670f44430ea21e0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 05:16:30.564884 containerd[2010]: time="2025-07-15T05:16:30.564855458Z" level=info msg="Container a98b5860791f37124fea2eefee75cf2ef9e0fc6d27e749fcb43b4f7fa4d332d9: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:30.568449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201748625.mount: Deactivated successfully. 
Jul 15 05:16:30.579701 containerd[2010]: time="2025-07-15T05:16:30.578580885Z" level=info msg="CreateContainer within sandbox \"4765121e5905572965ae7f7385e1bd55a0ae5ea646e858361670f44430ea21e0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a98b5860791f37124fea2eefee75cf2ef9e0fc6d27e749fcb43b4f7fa4d332d9\"" Jul 15 05:16:30.582306 containerd[2010]: time="2025-07-15T05:16:30.582106752Z" level=info msg="StartContainer for \"a98b5860791f37124fea2eefee75cf2ef9e0fc6d27e749fcb43b4f7fa4d332d9\"" Jul 15 05:16:30.584682 containerd[2010]: time="2025-07-15T05:16:30.583796899Z" level=info msg="connecting to shim a98b5860791f37124fea2eefee75cf2ef9e0fc6d27e749fcb43b4f7fa4d332d9" address="unix:///run/containerd/s/550c22d3018efa3fdb593bb6f847f4779f57460b869da3ae8fa940682b86119b" protocol=ttrpc version=3 Jul 15 05:16:30.654046 systemd[1]: Started cri-containerd-a98b5860791f37124fea2eefee75cf2ef9e0fc6d27e749fcb43b4f7fa4d332d9.scope - libcontainer container a98b5860791f37124fea2eefee75cf2ef9e0fc6d27e749fcb43b4f7fa4d332d9. Jul 15 05:16:30.739817 containerd[2010]: time="2025-07-15T05:16:30.739768667Z" level=info msg="StartContainer for \"a98b5860791f37124fea2eefee75cf2ef9e0fc6d27e749fcb43b4f7fa4d332d9\" returns successfully" Jul 15 05:16:31.411062 kubelet[3337]: E0715 05:16:31.411023 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.411062 kubelet[3337]: W0715 05:16:31.411052 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.411675 kubelet[3337]: E0715 05:16:31.411083 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.411675 kubelet[3337]: E0715 05:16:31.411644 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.411675 kubelet[3337]: W0715 05:16:31.411672 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.411791 kubelet[3337]: E0715 05:16:31.411688 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.411961 kubelet[3337]: E0715 05:16:31.411947 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.412098 kubelet[3337]: W0715 05:16:31.411998 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.412098 kubelet[3337]: E0715 05:16:31.412011 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.412843 kubelet[3337]: E0715 05:16:31.412773 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.413484 kubelet[3337]: W0715 05:16:31.413399 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.413484 kubelet[3337]: E0715 05:16:31.413416 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.413792 kubelet[3337]: E0715 05:16:31.413772 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.413792 kubelet[3337]: W0715 05:16:31.413784 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.413792 kubelet[3337]: E0715 05:16:31.413794 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.414081 kubelet[3337]: E0715 05:16:31.414053 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.414081 kubelet[3337]: W0715 05:16:31.414065 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.414081 kubelet[3337]: E0715 05:16:31.414074 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.414238 kubelet[3337]: E0715 05:16:31.414229 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.414238 kubelet[3337]: W0715 05:16:31.414238 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.414314 kubelet[3337]: E0715 05:16:31.414245 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.414430 kubelet[3337]: E0715 05:16:31.414407 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.414430 kubelet[3337]: W0715 05:16:31.414427 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.414548 kubelet[3337]: E0715 05:16:31.414440 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.414624 kubelet[3337]: E0715 05:16:31.414610 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.414624 kubelet[3337]: W0715 05:16:31.414621 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.414727 kubelet[3337]: E0715 05:16:31.414631 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.414833 kubelet[3337]: E0715 05:16:31.414815 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.414833 kubelet[3337]: W0715 05:16:31.414828 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.414912 kubelet[3337]: E0715 05:16:31.414838 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.415010 kubelet[3337]: E0715 05:16:31.414998 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.415010 kubelet[3337]: W0715 05:16:31.415007 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.415099 kubelet[3337]: E0715 05:16:31.415014 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.415680 kubelet[3337]: E0715 05:16:31.415646 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.415730 kubelet[3337]: W0715 05:16:31.415684 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.415730 kubelet[3337]: E0715 05:16:31.415695 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.415945 kubelet[3337]: E0715 05:16:31.415881 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.415945 kubelet[3337]: W0715 05:16:31.415891 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.415945 kubelet[3337]: E0715 05:16:31.415900 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.416138 kubelet[3337]: E0715 05:16:31.416072 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.416138 kubelet[3337]: W0715 05:16:31.416081 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.416138 kubelet[3337]: E0715 05:16:31.416088 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.416734 kubelet[3337]: E0715 05:16:31.416717 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.416734 kubelet[3337]: W0715 05:16:31.416730 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.416849 kubelet[3337]: E0715 05:16:31.416740 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.434172 kubelet[3337]: E0715 05:16:31.434138 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.434172 kubelet[3337]: W0715 05:16:31.434163 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.434172 kubelet[3337]: E0715 05:16:31.434183 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.434672 kubelet[3337]: E0715 05:16:31.434389 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.434672 kubelet[3337]: W0715 05:16:31.434396 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.434672 kubelet[3337]: E0715 05:16:31.434404 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.434763 kubelet[3337]: E0715 05:16:31.434750 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.434815 kubelet[3337]: W0715 05:16:31.434766 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.434815 kubelet[3337]: E0715 05:16:31.434778 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.434973 kubelet[3337]: E0715 05:16:31.434959 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.434973 kubelet[3337]: W0715 05:16:31.434970 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.435029 kubelet[3337]: E0715 05:16:31.434978 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.435181 kubelet[3337]: E0715 05:16:31.435168 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.435181 kubelet[3337]: W0715 05:16:31.435178 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.435241 kubelet[3337]: E0715 05:16:31.435185 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.435395 kubelet[3337]: E0715 05:16:31.435382 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.435431 kubelet[3337]: W0715 05:16:31.435399 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.435431 kubelet[3337]: E0715 05:16:31.435407 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.435632 kubelet[3337]: E0715 05:16:31.435618 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.435632 kubelet[3337]: W0715 05:16:31.435631 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.435717 kubelet[3337]: E0715 05:16:31.435639 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.435917 kubelet[3337]: E0715 05:16:31.435901 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.435917 kubelet[3337]: W0715 05:16:31.435913 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.435991 kubelet[3337]: E0715 05:16:31.435922 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.436140 kubelet[3337]: E0715 05:16:31.436120 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.436140 kubelet[3337]: W0715 05:16:31.436134 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.436338 kubelet[3337]: E0715 05:16:31.436145 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.436393 kubelet[3337]: E0715 05:16:31.436385 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.436422 kubelet[3337]: W0715 05:16:31.436392 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.436422 kubelet[3337]: E0715 05:16:31.436402 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.436652 kubelet[3337]: E0715 05:16:31.436633 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.436721 kubelet[3337]: W0715 05:16:31.436646 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.436721 kubelet[3337]: E0715 05:16:31.436682 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.436989 kubelet[3337]: E0715 05:16:31.436974 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.436989 kubelet[3337]: W0715 05:16:31.436986 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.437049 kubelet[3337]: E0715 05:16:31.436996 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.437394 kubelet[3337]: E0715 05:16:31.437373 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.437394 kubelet[3337]: W0715 05:16:31.437388 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.437394 kubelet[3337]: E0715 05:16:31.437397 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.437547 kubelet[3337]: E0715 05:16:31.437542 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.437601 kubelet[3337]: W0715 05:16:31.437548 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.437601 kubelet[3337]: E0715 05:16:31.437555 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.437756 kubelet[3337]: E0715 05:16:31.437735 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.437756 kubelet[3337]: W0715 05:16:31.437751 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.437862 kubelet[3337]: E0715 05:16:31.437762 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.437986 kubelet[3337]: E0715 05:16:31.437968 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.437986 kubelet[3337]: W0715 05:16:31.437980 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.438061 kubelet[3337]: E0715 05:16:31.437990 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.438276 kubelet[3337]: E0715 05:16:31.438262 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.438276 kubelet[3337]: W0715 05:16:31.438272 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.438339 kubelet[3337]: E0715 05:16:31.438280 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:16:31.438466 kubelet[3337]: E0715 05:16:31.438449 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:16:31.438466 kubelet[3337]: W0715 05:16:31.438459 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:16:31.438466 kubelet[3337]: E0715 05:16:31.438466 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:16:31.669748 containerd[2010]: time="2025-07-15T05:16:31.669449404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:31.672193 containerd[2010]: time="2025-07-15T05:16:31.671700594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 05:16:31.673714 containerd[2010]: time="2025-07-15T05:16:31.673191280Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:31.678544 containerd[2010]: time="2025-07-15T05:16:31.678494172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:31.679416 containerd[2010]: time="2025-07-15T05:16:31.679329246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.157794312s" Jul 15 05:16:31.679416 containerd[2010]: time="2025-07-15T05:16:31.679369442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 05:16:31.685614 containerd[2010]: time="2025-07-15T05:16:31.685579708Z" level=info msg="CreateContainer within sandbox \"112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 05:16:31.694679 containerd[2010]: time="2025-07-15T05:16:31.694635717Z" level=info msg="Container 90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:31.716524 containerd[2010]: time="2025-07-15T05:16:31.716455867Z" level=info msg="CreateContainer within sandbox \"112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba\"" Jul 15 05:16:31.717717 containerd[2010]: time="2025-07-15T05:16:31.717340332Z" level=info msg="StartContainer for \"90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba\"" Jul 15 05:16:31.719150 containerd[2010]: time="2025-07-15T05:16:31.719118996Z" level=info msg="connecting to shim 90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba" address="unix:///run/containerd/s/38b5c604f099516f6e0d05795d33948a32cf6e87671db1bcc2246ade723cc84d" protocol=ttrpc version=3 Jul 15 05:16:31.745895 systemd[1]: Started cri-containerd-90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba.scope - libcontainer container 90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba. 
Jul 15 05:16:31.796382 containerd[2010]: time="2025-07-15T05:16:31.796300329Z" level=info msg="StartContainer for \"90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba\" returns successfully" Jul 15 05:16:31.813323 systemd[1]: cri-containerd-90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba.scope: Deactivated successfully. Jul 15 05:16:31.845906 containerd[2010]: time="2025-07-15T05:16:31.845859568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba\" id:\"90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba\" pid:4283 exited_at:{seconds:1752556591 nanos:815871890}" Jul 15 05:16:31.851267 containerd[2010]: time="2025-07-15T05:16:31.851204540Z" level=info msg="received exit event container_id:\"90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba\" id:\"90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba\" pid:4283 exited_at:{seconds:1752556591 nanos:815871890}" Jul 15 05:16:31.885030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90de8b09b77070bb3edecc8a0f125d1d1840e6d8a1d9b4d9da3885408a769fba-rootfs.mount: Deactivated successfully. Jul 15 05:16:32.230142 kubelet[3337]: E0715 05:16:32.230086 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfc8q" podUID="d668f1c9-e444-45ed-aa1c-55630e5a2640" Jul 15 05:16:32.364523 containerd[2010]: time="2025-07-15T05:16:32.364165652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 05:16:32.367535 kubelet[3337]: I0715 05:16:32.366011 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:32.393682 kubelet[3337]: I0715 05:16:32.393539 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59794c788d-m4txk" podStartSLOduration=2.57218546 podStartE2EDuration="5.393516131s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:27.699735608 +0000 UTC m=+21.608382431" lastFinishedPulling="2025-07-15 05:16:30.521066278 +0000 UTC m=+24.429713102" observedRunningTime="2025-07-15 05:16:31.379185177 +0000 UTC m=+25.287832020" watchObservedRunningTime="2025-07-15 05:16:32.393516131 +0000 UTC m=+26.302162981" Jul 15 05:16:34.241172 kubelet[3337]: E0715 05:16:34.239404 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfc8q" podUID="d668f1c9-e444-45ed-aa1c-55630e5a2640" Jul 15 05:16:35.904384 containerd[2010]: time="2025-07-15T05:16:35.904334034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:35.905507 containerd[2010]: time="2025-07-15T05:16:35.905474668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 05:16:35.907152 containerd[2010]: time="2025-07-15T05:16:35.907100193Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 
05:16:35.909875 containerd[2010]: time="2025-07-15T05:16:35.909812705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:35.910416 containerd[2010]: time="2025-07-15T05:16:35.910264760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.546055859s" Jul 15 05:16:35.910416 containerd[2010]: time="2025-07-15T05:16:35.910294822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 05:16:35.916874 containerd[2010]: time="2025-07-15T05:16:35.916830174Z" level=info msg="CreateContainer within sandbox \"112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 05:16:35.929690 containerd[2010]: time="2025-07-15T05:16:35.929598955Z" level=info msg="Container 38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:35.945051 containerd[2010]: time="2025-07-15T05:16:35.945003402Z" level=info msg="CreateContainer within sandbox \"112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202\"" Jul 15 05:16:35.945945 containerd[2010]: time="2025-07-15T05:16:35.945704366Z" level=info msg="StartContainer for \"38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202\"" Jul 15 05:16:35.949131 containerd[2010]: time="2025-07-15T05:16:35.949053844Z" level=info msg="connecting to shim 38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202" address="unix:///run/containerd/s/38b5c604f099516f6e0d05795d33948a32cf6e87671db1bcc2246ade723cc84d" protocol=ttrpc version=3 Jul 15 05:16:35.978874 systemd[1]: Started cri-containerd-38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202.scope - libcontainer container 38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202. Jul 15 05:16:36.029337 containerd[2010]: time="2025-07-15T05:16:36.029224790Z" level=info msg="StartContainer for \"38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202\" returns successfully" Jul 15 05:16:36.231933 kubelet[3337]: E0715 05:16:36.231560 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfc8q" podUID="d668f1c9-e444-45ed-aa1c-55630e5a2640" Jul 15 05:16:37.183479 systemd[1]: cri-containerd-38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202.scope: Deactivated successfully. Jul 15 05:16:37.183892 systemd[1]: cri-containerd-38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202.scope: Consumed 560ms CPU time, 163.8M memory peak, 6.8M read from disk, 171.2M written to disk. 
Jul 15 05:16:37.187110 containerd[2010]: time="2025-07-15T05:16:37.186714906Z" level=info msg="received exit event container_id:\"38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202\" id:\"38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202\" pid:4339 exited_at:{seconds:1752556597 nanos:184878433}" Jul 15 05:16:37.187110 containerd[2010]: time="2025-07-15T05:16:37.187081576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202\" id:\"38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202\" pid:4339 exited_at:{seconds:1752556597 nanos:184878433}" Jul 15 05:16:37.229866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38ec071ed06666b77cff23daa3e1230bd0a42df9e161332245110b9140e6b202-rootfs.mount: Deactivated successfully. Jul 15 05:16:37.261507 kubelet[3337]: I0715 05:16:37.261426 3337 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 05:16:37.341711 systemd[1]: Created slice kubepods-besteffort-poda14197d6_2780_4c4a_9c8c_3f00c12904e5.slice - libcontainer container kubepods-besteffort-poda14197d6_2780_4c4a_9c8c_3f00c12904e5.slice. Jul 15 05:16:37.351105 systemd[1]: Created slice kubepods-burstable-podbc6d907d_23fb_4a2b_bf72_3b55f4a815fb.slice - libcontainer container kubepods-burstable-podbc6d907d_23fb_4a2b_bf72_3b55f4a815fb.slice. Jul 15 05:16:37.362621 systemd[1]: Created slice kubepods-besteffort-pod5a9bb308_2535_472f_9c1c_5a5226fa3191.slice - libcontainer container kubepods-besteffort-pod5a9bb308_2535_472f_9c1c_5a5226fa3191.slice. Jul 15 05:16:37.366990 systemd[1]: Created slice kubepods-burstable-poddceb10b0_f2a4_468d_8f0c_c0d5cbe355da.slice - libcontainer container kubepods-burstable-poddceb10b0_f2a4_468d_8f0c_c0d5cbe355da.slice. Jul 15 05:16:37.374907 systemd[1]: Created slice kubepods-besteffort-podeac85450_9178_4ce3_a16e_099c7bccfe9f.slice - libcontainer container kubepods-besteffort-podeac85450_9178_4ce3_a16e_099c7bccfe9f.slice. Jul 15 05:16:37.382777 systemd[1]: Created slice kubepods-besteffort-pod089f7323_e931_4192_ab88_cfa8c0d68fec.slice - libcontainer container kubepods-besteffort-pod089f7323_e931_4192_ab88_cfa8c0d68fec.slice. Jul 15 05:16:37.390216 systemd[1]: Created slice kubepods-besteffort-pod9256fa51_e128_4b6d_af1f_407f753c667b.slice - libcontainer container kubepods-besteffort-pod9256fa51_e128_4b6d_af1f_407f753c667b.slice. 
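The "Created slice kubepods-..." records above reflect kubelet's systemd cgroup driver: each pod gets a slice whose name encodes the QoS class plus the pod UID with dashes turned into underscores. A small sketch of that naming, under the assumption that Guaranteed pods simply omit the QoS segment (verify against your kubelet version):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the slice name pattern seen in the records above.
// The Guaranteed-pod rule and the parent-slice path are assumptions.
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	switch strings.ToLower(qosClass) {
	case "besteffort", "burstable":
		return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), uid)
	default: // assumed: Guaranteed pods sit directly under kubepods
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
}

func main() {
	fmt.Println(podSliceName("BestEffort", "a14197d6-2780-4c4a-9c8c-3f00c12904e5"))
	// kubepods-besteffort-poda14197d6_2780_4c4a_9c8c_3f00c12904e5.slice
	fmt.Println(podSliceName("Burstable", "bc6d907d-23fb-4a2b-bf72-3b55f4a815fb"))
	// kubepods-burstable-podbc6d907d_23fb_4a2b_bf72_3b55f4a815fb.slice
}
```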
Jul 15 05:16:37.392026 kubelet[3337]: I0715 05:16:37.391796 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dceb10b0-f2a4-468d-8f0c-c0d5cbe355da-config-volume\") pod \"coredns-674b8bbfcf-bvn2j\" (UID: \"dceb10b0-f2a4-468d-8f0c-c0d5cbe355da\") " pod="kube-system/coredns-674b8bbfcf-bvn2j" Jul 15 05:16:37.392546 kubelet[3337]: I0715 05:16:37.392529 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac85450-9178-4ce3-a16e-099c7bccfe9f-config\") pod \"goldmane-768f4c5c69-8hfgl\" (UID: \"eac85450-9178-4ce3-a16e-099c7bccfe9f\") " pod="calico-system/goldmane-768f4c5c69-8hfgl" Jul 15 05:16:37.393249 kubelet[3337]: I0715 05:16:37.393229 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eac85450-9178-4ce3-a16e-099c7bccfe9f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-8hfgl\" (UID: \"eac85450-9178-4ce3-a16e-099c7bccfe9f\") " pod="calico-system/goldmane-768f4c5c69-8hfgl" Jul 15 05:16:37.393456 kubelet[3337]: I0715 05:16:37.393443 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/089f7323-e931-4192-ab88-cfa8c0d68fec-calico-apiserver-certs\") pod \"calico-apiserver-6d747b47d9-9kmx9\" (UID: \"089f7323-e931-4192-ab88-cfa8c0d68fec\") " pod="calico-apiserver/calico-apiserver-6d747b47d9-9kmx9" Jul 15 05:16:37.394136 kubelet[3337]: I0715 05:16:37.393648 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pnnw\" (UniqueName: \"kubernetes.io/projected/5a9bb308-2535-472f-9c1c-5a5226fa3191-kube-api-access-8pnnw\") pod \"calico-apiserver-6d747b47d9-86lt6\" (UID: \"5a9bb308-2535-472f-9c1c-5a5226fa3191\") " pod="calico-apiserver/calico-apiserver-6d747b47d9-86lt6" Jul 15 05:16:37.394136 kubelet[3337]: I0715 05:16:37.393695 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a14197d6-2780-4c4a-9c8c-3f00c12904e5-whisker-backend-key-pair\") pod \"whisker-64d4b9794c-ch65b\" (UID: \"a14197d6-2780-4c4a-9c8c-3f00c12904e5\") " pod="calico-system/whisker-64d4b9794c-ch65b" Jul 15 05:16:37.394136 kubelet[3337]: I0715 05:16:37.393712 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzk5j\" (UniqueName: \"kubernetes.io/projected/dceb10b0-f2a4-468d-8f0c-c0d5cbe355da-kube-api-access-qzk5j\") pod \"coredns-674b8bbfcf-bvn2j\" (UID: \"dceb10b0-f2a4-468d-8f0c-c0d5cbe355da\") " pod="kube-system/coredns-674b8bbfcf-bvn2j" Jul 15 05:16:37.394136 kubelet[3337]: I0715 05:16:37.393727 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eac85450-9178-4ce3-a16e-099c7bccfe9f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-8hfgl\" (UID: \"eac85450-9178-4ce3-a16e-099c7bccfe9f\") " pod="calico-system/goldmane-768f4c5c69-8hfgl" Jul 15 05:16:37.394136 kubelet[3337]: I0715 05:16:37.393742 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5b4v\" (UniqueName: 
\"kubernetes.io/projected/eac85450-9178-4ce3-a16e-099c7bccfe9f-kube-api-access-g5b4v\") pod \"goldmane-768f4c5c69-8hfgl\" (UID: \"eac85450-9178-4ce3-a16e-099c7bccfe9f\") " pod="calico-system/goldmane-768f4c5c69-8hfgl" Jul 15 05:16:37.394280 kubelet[3337]: I0715 05:16:37.393771 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9256fa51-e128-4b6d-af1f-407f753c667b-tigera-ca-bundle\") pod \"calico-kube-controllers-76587bdddf-m5mzb\" (UID: \"9256fa51-e128-4b6d-af1f-407f753c667b\") " pod="calico-system/calico-kube-controllers-76587bdddf-m5mzb" Jul 15 05:16:37.394280 kubelet[3337]: I0715 05:16:37.393791 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a9bb308-2535-472f-9c1c-5a5226fa3191-calico-apiserver-certs\") pod \"calico-apiserver-6d747b47d9-86lt6\" (UID: \"5a9bb308-2535-472f-9c1c-5a5226fa3191\") " pod="calico-apiserver/calico-apiserver-6d747b47d9-86lt6" Jul 15 05:16:37.394280 kubelet[3337]: I0715 05:16:37.393809 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc6d907d-23fb-4a2b-bf72-3b55f4a815fb-config-volume\") pod \"coredns-674b8bbfcf-n4hpt\" (UID: \"bc6d907d-23fb-4a2b-bf72-3b55f4a815fb\") " pod="kube-system/coredns-674b8bbfcf-n4hpt" Jul 15 05:16:37.394280 kubelet[3337]: I0715 05:16:37.393836 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccsqb\" (UniqueName: \"kubernetes.io/projected/bc6d907d-23fb-4a2b-bf72-3b55f4a815fb-kube-api-access-ccsqb\") pod \"coredns-674b8bbfcf-n4hpt\" (UID: \"bc6d907d-23fb-4a2b-bf72-3b55f4a815fb\") " pod="kube-system/coredns-674b8bbfcf-n4hpt" Jul 15 05:16:37.394280 kubelet[3337]: I0715 05:16:37.393855 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkt6f\" (UniqueName: \"kubernetes.io/projected/089f7323-e931-4192-ab88-cfa8c0d68fec-kube-api-access-hkt6f\") pod \"calico-apiserver-6d747b47d9-9kmx9\" (UID: \"089f7323-e931-4192-ab88-cfa8c0d68fec\") " pod="calico-apiserver/calico-apiserver-6d747b47d9-9kmx9" Jul 15 05:16:37.394406 kubelet[3337]: I0715 05:16:37.393874 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpq6k\" (UniqueName: \"kubernetes.io/projected/9256fa51-e128-4b6d-af1f-407f753c667b-kube-api-access-dpq6k\") pod \"calico-kube-controllers-76587bdddf-m5mzb\" (UID: \"9256fa51-e128-4b6d-af1f-407f753c667b\") " pod="calico-system/calico-kube-controllers-76587bdddf-m5mzb" Jul 15 05:16:37.394406 kubelet[3337]: I0715 05:16:37.393891 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a14197d6-2780-4c4a-9c8c-3f00c12904e5-whisker-ca-bundle\") pod \"whisker-64d4b9794c-ch65b\" (UID: \"a14197d6-2780-4c4a-9c8c-3f00c12904e5\") " pod="calico-system/whisker-64d4b9794c-ch65b" Jul 15 05:16:37.394406 kubelet[3337]: I0715 05:16:37.393922 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkdgv\" (UniqueName: \"kubernetes.io/projected/a14197d6-2780-4c4a-9c8c-3f00c12904e5-kube-api-access-tkdgv\") pod \"whisker-64d4b9794c-ch65b\" (UID: 
\"a14197d6-2780-4c4a-9c8c-3f00c12904e5\") " pod="calico-system/whisker-64d4b9794c-ch65b" Jul 15 05:16:37.417566 containerd[2010]: time="2025-07-15T05:16:37.417475315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 05:16:37.654115 containerd[2010]: time="2025-07-15T05:16:37.654061141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d4b9794c-ch65b,Uid:a14197d6-2780-4c4a-9c8c-3f00c12904e5,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:37.659370 containerd[2010]: time="2025-07-15T05:16:37.658800903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4hpt,Uid:bc6d907d-23fb-4a2b-bf72-3b55f4a815fb,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:37.668324 containerd[2010]: time="2025-07-15T05:16:37.668135159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d747b47d9-86lt6,Uid:5a9bb308-2535-472f-9c1c-5a5226fa3191,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:37.670734 containerd[2010]: time="2025-07-15T05:16:37.670695172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvn2j,Uid:dceb10b0-f2a4-468d-8f0c-c0d5cbe355da,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:37.695703 containerd[2010]: time="2025-07-15T05:16:37.694752975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d747b47d9-9kmx9,Uid:089f7323-e931-4192-ab88-cfa8c0d68fec,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:37.697676 containerd[2010]: time="2025-07-15T05:16:37.697610642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8hfgl,Uid:eac85450-9178-4ce3-a16e-099c7bccfe9f,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:37.699469 containerd[2010]: time="2025-07-15T05:16:37.699312140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76587bdddf-m5mzb,Uid:9256fa51-e128-4b6d-af1f-407f753c667b,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:38.026908 containerd[2010]: time="2025-07-15T05:16:38.026804968Z" level=error msg="Failed to destroy network for sandbox \"ec2c7e8af31da300ab9fc8c68d8032af550c57f34ed883c26f40eb595ac53c05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.028689 containerd[2010]: time="2025-07-15T05:16:38.028605217Z" level=error msg="Failed to destroy network for sandbox \"2e935322346876bb5011880659e795518978435dbe5b8333bb83243e242e41fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.029672 containerd[2010]: time="2025-07-15T05:16:38.029441101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8hfgl,Uid:eac85450-9178-4ce3-a16e-099c7bccfe9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec2c7e8af31da300ab9fc8c68d8032af550c57f34ed883c26f40eb595ac53c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.032236 kubelet[3337]: E0715 05:16:38.030791 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ec2c7e8af31da300ab9fc8c68d8032af550c57f34ed883c26f40eb595ac53c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.032236 kubelet[3337]: E0715 05:16:38.030859 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec2c7e8af31da300ab9fc8c68d8032af550c57f34ed883c26f40eb595ac53c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8hfgl" Jul 15 05:16:38.032236 kubelet[3337]: E0715 05:16:38.030891 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec2c7e8af31da300ab9fc8c68d8032af550c57f34ed883c26f40eb595ac53c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8hfgl" Jul 15 05:16:38.032511 kubelet[3337]: E0715 05:16:38.030946 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-8hfgl_calico-system(eac85450-9178-4ce3-a16e-099c7bccfe9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-8hfgl_calico-system(eac85450-9178-4ce3-a16e-099c7bccfe9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec2c7e8af31da300ab9fc8c68d8032af550c57f34ed883c26f40eb595ac53c05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8hfgl" podUID="eac85450-9178-4ce3-a16e-099c7bccfe9f" Jul 15 05:16:38.035058 containerd[2010]: time="2025-07-15T05:16:38.035014065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d4b9794c-ch65b,Uid:a14197d6-2780-4c4a-9c8c-3f00c12904e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e935322346876bb5011880659e795518978435dbe5b8333bb83243e242e41fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.051605 kubelet[3337]: E0715 05:16:38.050748 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e935322346876bb5011880659e795518978435dbe5b8333bb83243e242e41fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.051605 kubelet[3337]: E0715 05:16:38.050807 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e935322346876bb5011880659e795518978435dbe5b8333bb83243e242e41fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64d4b9794c-ch65b" 
Jul 15 05:16:38.051605 kubelet[3337]: E0715 05:16:38.050828 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e935322346876bb5011880659e795518978435dbe5b8333bb83243e242e41fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64d4b9794c-ch65b" Jul 15 05:16:38.051812 kubelet[3337]: E0715 05:16:38.050881 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64d4b9794c-ch65b_calico-system(a14197d6-2780-4c4a-9c8c-3f00c12904e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64d4b9794c-ch65b_calico-system(a14197d6-2780-4c4a-9c8c-3f00c12904e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e935322346876bb5011880659e795518978435dbe5b8333bb83243e242e41fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64d4b9794c-ch65b" podUID="a14197d6-2780-4c4a-9c8c-3f00c12904e5" Jul 15 05:16:38.065425 containerd[2010]: time="2025-07-15T05:16:38.065386582Z" level=error msg="Failed to destroy network for sandbox \"f6a1197267e7a7a76d6cce5a7a7b952964c6ca5edbb9dcce64d36cf0b4fd6f71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.072274 containerd[2010]: time="2025-07-15T05:16:38.071936673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76587bdddf-m5mzb,Uid:9256fa51-e128-4b6d-af1f-407f753c667b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a1197267e7a7a76d6cce5a7a7b952964c6ca5edbb9dcce64d36cf0b4fd6f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.072767 kubelet[3337]: E0715 05:16:38.072722 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a1197267e7a7a76d6cce5a7a7b952964c6ca5edbb9dcce64d36cf0b4fd6f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.072851 kubelet[3337]: E0715 05:16:38.072788 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a1197267e7a7a76d6cce5a7a7b952964c6ca5edbb9dcce64d36cf0b4fd6f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76587bdddf-m5mzb" Jul 15 05:16:38.072851 kubelet[3337]: E0715 05:16:38.072809 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a1197267e7a7a76d6cce5a7a7b952964c6ca5edbb9dcce64d36cf0b4fd6f71\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76587bdddf-m5mzb" Jul 15 05:16:38.072909 kubelet[3337]: E0715 05:16:38.072857 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76587bdddf-m5mzb_calico-system(9256fa51-e128-4b6d-af1f-407f753c667b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76587bdddf-m5mzb_calico-system(9256fa51-e128-4b6d-af1f-407f753c667b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6a1197267e7a7a76d6cce5a7a7b952964c6ca5edbb9dcce64d36cf0b4fd6f71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76587bdddf-m5mzb" podUID="9256fa51-e128-4b6d-af1f-407f753c667b" Jul 15 05:16:38.078437 containerd[2010]: time="2025-07-15T05:16:38.078330367Z" level=error msg="Failed to destroy network for sandbox \"3801de339f74bed01827a6ae6351c5026d5f98306c786b5a3edeee50bd565a08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.080629 containerd[2010]: time="2025-07-15T05:16:38.080545280Z" level=error msg="Failed to destroy network for sandbox \"2f7b0cd12b0088cdbca348e1298f2967e4bcd3d9ea59d7967f421166e3a7ae22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.085000 containerd[2010]: time="2025-07-15T05:16:38.084537626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4hpt,Uid:bc6d907d-23fb-4a2b-bf72-3b55f4a815fb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3801de339f74bed01827a6ae6351c5026d5f98306c786b5a3edeee50bd565a08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.085176 kubelet[3337]: E0715 05:16:38.085070 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3801de339f74bed01827a6ae6351c5026d5f98306c786b5a3edeee50bd565a08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.085176 kubelet[3337]: E0715 05:16:38.085120 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3801de339f74bed01827a6ae6351c5026d5f98306c786b5a3edeee50bd565a08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n4hpt" Jul 15 05:16:38.085176 kubelet[3337]: E0715 05:16:38.085141 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3801de339f74bed01827a6ae6351c5026d5f98306c786b5a3edeee50bd565a08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n4hpt" Jul 15 05:16:38.085280 kubelet[3337]: E0715 05:16:38.085183 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n4hpt_kube-system(bc6d907d-23fb-4a2b-bf72-3b55f4a815fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n4hpt_kube-system(bc6d907d-23fb-4a2b-bf72-3b55f4a815fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3801de339f74bed01827a6ae6351c5026d5f98306c786b5a3edeee50bd565a08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n4hpt" podUID="bc6d907d-23fb-4a2b-bf72-3b55f4a815fb" Jul 15 05:16:38.085649 containerd[2010]: time="2025-07-15T05:16:38.085614575Z" level=error msg="Failed to destroy network for sandbox \"336699f581f2820d9dfc7f3edfeb794bbe535a3ed894fa094367e25db05e4859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.087697 containerd[2010]: time="2025-07-15T05:16:38.087263698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvn2j,Uid:dceb10b0-f2a4-468d-8f0c-c0d5cbe355da,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f7b0cd12b0088cdbca348e1298f2967e4bcd3d9ea59d7967f421166e3a7ae22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.087883 kubelet[3337]: E0715 05:16:38.087487 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f7b0cd12b0088cdbca348e1298f2967e4bcd3d9ea59d7967f421166e3a7ae22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.087883 kubelet[3337]: E0715 05:16:38.087533 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f7b0cd12b0088cdbca348e1298f2967e4bcd3d9ea59d7967f421166e3a7ae22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bvn2j" Jul 15 05:16:38.087883 kubelet[3337]: E0715 05:16:38.087557 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f7b0cd12b0088cdbca348e1298f2967e4bcd3d9ea59d7967f421166e3a7ae22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bvn2j" Jul 15 05:16:38.088641 kubelet[3337]: E0715 05:16:38.087600 3337 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bvn2j_kube-system(dceb10b0-f2a4-468d-8f0c-c0d5cbe355da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bvn2j_kube-system(dceb10b0-f2a4-468d-8f0c-c0d5cbe355da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f7b0cd12b0088cdbca348e1298f2967e4bcd3d9ea59d7967f421166e3a7ae22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bvn2j" podUID="dceb10b0-f2a4-468d-8f0c-c0d5cbe355da" Jul 15 05:16:38.088884 containerd[2010]: time="2025-07-15T05:16:38.088302696Z" level=error msg="Failed to destroy network for sandbox \"68f5888e000878473724d9182d061eebde2a1ef05eaa1acf1bfb9a02dfae9b69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.089932 containerd[2010]: time="2025-07-15T05:16:38.089739500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d747b47d9-86lt6,Uid:5a9bb308-2535-472f-9c1c-5a5226fa3191,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"336699f581f2820d9dfc7f3edfeb794bbe535a3ed894fa094367e25db05e4859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.090736 kubelet[3337]: E0715 05:16:38.090198 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336699f581f2820d9dfc7f3edfeb794bbe535a3ed894fa094367e25db05e4859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.090736 kubelet[3337]: E0715 05:16:38.090260 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336699f581f2820d9dfc7f3edfeb794bbe535a3ed894fa094367e25db05e4859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d747b47d9-86lt6" Jul 15 05:16:38.090736 kubelet[3337]: E0715 05:16:38.090279 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336699f581f2820d9dfc7f3edfeb794bbe535a3ed894fa094367e25db05e4859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d747b47d9-86lt6" Jul 15 05:16:38.090866 kubelet[3337]: E0715 05:16:38.090334 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d747b47d9-86lt6_calico-apiserver(5a9bb308-2535-472f-9c1c-5a5226fa3191)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6d747b47d9-86lt6_calico-apiserver(5a9bb308-2535-472f-9c1c-5a5226fa3191)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"336699f581f2820d9dfc7f3edfeb794bbe535a3ed894fa094367e25db05e4859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d747b47d9-86lt6" podUID="5a9bb308-2535-472f-9c1c-5a5226fa3191" Jul 15 05:16:38.092012 containerd[2010]: time="2025-07-15T05:16:38.091974257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d747b47d9-9kmx9,Uid:089f7323-e931-4192-ab88-cfa8c0d68fec,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f5888e000878473724d9182d061eebde2a1ef05eaa1acf1bfb9a02dfae9b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.092556 kubelet[3337]: E0715 05:16:38.092427 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f5888e000878473724d9182d061eebde2a1ef05eaa1acf1bfb9a02dfae9b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.092556 kubelet[3337]: E0715 05:16:38.092480 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f5888e000878473724d9182d061eebde2a1ef05eaa1acf1bfb9a02dfae9b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d747b47d9-9kmx9" Jul 15 05:16:38.092556 kubelet[3337]: E0715 05:16:38.092500 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f5888e000878473724d9182d061eebde2a1ef05eaa1acf1bfb9a02dfae9b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d747b47d9-9kmx9" Jul 15 05:16:38.092741 kubelet[3337]: E0715 05:16:38.092717 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d747b47d9-9kmx9_calico-apiserver(089f7323-e931-4192-ab88-cfa8c0d68fec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d747b47d9-9kmx9_calico-apiserver(089f7323-e931-4192-ab88-cfa8c0d68fec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68f5888e000878473724d9182d061eebde2a1ef05eaa1acf1bfb9a02dfae9b69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d747b47d9-9kmx9" podUID="089f7323-e931-4192-ab88-cfa8c0d68fec" Jul 15 05:16:38.240076 systemd[1]: Created slice kubepods-besteffort-podd668f1c9_e444_45ed_aa1c_55630e5a2640.slice - libcontainer container 
kubepods-besteffort-podd668f1c9_e444_45ed_aa1c_55630e5a2640.slice. Jul 15 05:16:38.242711 containerd[2010]: time="2025-07-15T05:16:38.242611270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfc8q,Uid:d668f1c9-e444-45ed-aa1c-55630e5a2640,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:38.305726 containerd[2010]: time="2025-07-15T05:16:38.305017763Z" level=error msg="Failed to destroy network for sandbox \"9199ad97844c1538bc205d489544d58dad88d0c17d935657e0aae73b78c81556\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.310039 systemd[1]: run-netns-cni\x2d4437b20b\x2d5b94\x2d4bec\x2dfea5\x2d6137955b4c59.mount: Deactivated successfully. Jul 15 05:16:38.311090 containerd[2010]: time="2025-07-15T05:16:38.310995827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfc8q,Uid:d668f1c9-e444-45ed-aa1c-55630e5a2640,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9199ad97844c1538bc205d489544d58dad88d0c17d935657e0aae73b78c81556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.312061 kubelet[3337]: E0715 05:16:38.311651 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9199ad97844c1538bc205d489544d58dad88d0c17d935657e0aae73b78c81556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:16:38.312753 kubelet[3337]: E0715 05:16:38.312079 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9199ad97844c1538bc205d489544d58dad88d0c17d935657e0aae73b78c81556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gfc8q" Jul 15 05:16:38.312753 kubelet[3337]: E0715 05:16:38.312114 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9199ad97844c1538bc205d489544d58dad88d0c17d935657e0aae73b78c81556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gfc8q" Jul 15 05:16:38.312753 kubelet[3337]: E0715 05:16:38.312193 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gfc8q_calico-system(d668f1c9-e444-45ed-aa1c-55630e5a2640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gfc8q_calico-system(d668f1c9-e444-45ed-aa1c-55630e5a2640)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9199ad97844c1538bc205d489544d58dad88d0c17d935657e0aae73b78c81556\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-gfc8q" podUID="d668f1c9-e444-45ed-aa1c-55630e5a2640" Jul 15 05:16:45.336085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560470798.mount: Deactivated successfully. Jul 15 05:16:45.395785 containerd[2010]: time="2025-07-15T05:16:45.385252248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:45.405359 containerd[2010]: time="2025-07-15T05:16:45.404906879Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:45.405527 containerd[2010]: time="2025-07-15T05:16:45.405503525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 05:16:45.410817 containerd[2010]: time="2025-07-15T05:16:45.410780286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:45.411331 containerd[2010]: time="2025-07-15T05:16:45.411304654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.993782248s" Jul 15 05:16:45.411394 containerd[2010]: time="2025-07-15T05:16:45.411337160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 05:16:45.431170 containerd[2010]: time="2025-07-15T05:16:45.431131179Z" level=info msg="CreateContainer within sandbox \"112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 05:16:45.470925 containerd[2010]: time="2025-07-15T05:16:45.470783796Z" level=info msg="Container 8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:45.473380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275137156.mount: Deactivated successfully. Jul 15 05:16:45.523513 containerd[2010]: time="2025-07-15T05:16:45.523461168Z" level=info msg="CreateContainer within sandbox \"112cca7794fdb3548b9211ad9ea1d7694bfbb0d01ff668cdc4d8b9d487ee5f00\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\"" Jul 15 05:16:45.524210 containerd[2010]: time="2025-07-15T05:16:45.524087015Z" level=info msg="StartContainer for \"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\"" Jul 15 05:16:45.531838 containerd[2010]: time="2025-07-15T05:16:45.531787334Z" level=info msg="connecting to shim 8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c" address="unix:///run/containerd/s/38b5c604f099516f6e0d05795d33948a32cf6e87671db1bcc2246ade723cc84d" protocol=ttrpc version=3 Jul 15 05:16:45.651718 systemd[1]: Started cri-containerd-8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c.scope - libcontainer container 8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c. 
Jul 15 05:16:45.716770 containerd[2010]: time="2025-07-15T05:16:45.716729942Z" level=info msg="StartContainer for \"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\" returns successfully" Jul 15 05:16:46.082117 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 05:16:46.083038 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 05:16:46.581216 kubelet[3337]: I0715 05:16:46.578033 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l7qrw" podStartSLOduration=2.332743347 podStartE2EDuration="19.578010804s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:28.166727005 +0000 UTC m=+22.075373830" lastFinishedPulling="2025-07-15 05:16:45.411994451 +0000 UTC m=+39.320641287" observedRunningTime="2025-07-15 05:16:46.576844387 +0000 UTC m=+40.485491235" watchObservedRunningTime="2025-07-15 05:16:46.578010804 +0000 UTC m=+40.486657652" Jul 15 05:16:46.585383 kubelet[3337]: I0715 05:16:46.584789 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdgv\" (UniqueName: \"kubernetes.io/projected/a14197d6-2780-4c4a-9c8c-3f00c12904e5-kube-api-access-tkdgv\") pod \"a14197d6-2780-4c4a-9c8c-3f00c12904e5\" (UID: \"a14197d6-2780-4c4a-9c8c-3f00c12904e5\") " Jul 15 05:16:46.585626 kubelet[3337]: I0715 05:16:46.585593 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a14197d6-2780-4c4a-9c8c-3f00c12904e5-whisker-ca-bundle\") pod \"a14197d6-2780-4c4a-9c8c-3f00c12904e5\" (UID: \"a14197d6-2780-4c4a-9c8c-3f00c12904e5\") " Jul 15 05:16:46.586646 kubelet[3337]: I0715 05:16:46.585917 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a14197d6-2780-4c4a-9c8c-3f00c12904e5-whisker-backend-key-pair\") pod \"a14197d6-2780-4c4a-9c8c-3f00c12904e5\" (UID: \"a14197d6-2780-4c4a-9c8c-3f00c12904e5\") " Jul 15 05:16:46.590531 kubelet[3337]: I0715 05:16:46.589697 3337 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a14197d6-2780-4c4a-9c8c-3f00c12904e5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a14197d6-2780-4c4a-9c8c-3f00c12904e5" (UID: "a14197d6-2780-4c4a-9c8c-3f00c12904e5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 05:16:46.620409 kubelet[3337]: I0715 05:16:46.620365 3337 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14197d6-2780-4c4a-9c8c-3f00c12904e5-kube-api-access-tkdgv" (OuterVolumeSpecName: "kube-api-access-tkdgv") pod "a14197d6-2780-4c4a-9c8c-3f00c12904e5" (UID: "a14197d6-2780-4c4a-9c8c-3f00c12904e5"). InnerVolumeSpecName "kube-api-access-tkdgv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:16:46.622242 kubelet[3337]: I0715 05:16:46.622212 3337 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a14197d6-2780-4c4a-9c8c-3f00c12904e5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a14197d6-2780-4c4a-9c8c-3f00c12904e5" (UID: "a14197d6-2780-4c4a-9c8c-3f00c12904e5"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 05:16:46.622457 systemd[1]: var-lib-kubelet-pods-a14197d6\x2d2780\x2d4c4a\x2d9c8c\x2d3f00c12904e5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 05:16:46.622595 systemd[1]: var-lib-kubelet-pods-a14197d6\x2d2780\x2d4c4a\x2d9c8c\x2d3f00c12904e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtkdgv.mount: Deactivated successfully. Jul 15 05:16:46.687154 kubelet[3337]: I0715 05:16:46.687124 3337 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a14197d6-2780-4c4a-9c8c-3f00c12904e5-whisker-backend-key-pair\") on node \"ip-172-31-21-211\" DevicePath \"\"" Jul 15 05:16:46.687295 kubelet[3337]: I0715 05:16:46.687174 3337 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdgv\" (UniqueName: \"kubernetes.io/projected/a14197d6-2780-4c4a-9c8c-3f00c12904e5-kube-api-access-tkdgv\") on node \"ip-172-31-21-211\" DevicePath \"\"" Jul 15 05:16:46.687295 kubelet[3337]: I0715 05:16:46.687192 3337 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a14197d6-2780-4c4a-9c8c-3f00c12904e5-whisker-ca-bundle\") on node \"ip-172-31-21-211\" DevicePath \"\"" Jul 15 05:16:46.842838 containerd[2010]: time="2025-07-15T05:16:46.842673347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\" id:\"15546f7ea95cac850b1ac59846e26c8a350d9f68fe7e748dfda6f1fdaa25ab20\" pid:4682 exit_status:1 exited_at:{seconds:1752556606 nanos:832334653}" Jul 15 05:16:47.548060 systemd[1]: Removed slice kubepods-besteffort-poda14197d6_2780_4c4a_9c8c_3f00c12904e5.slice - libcontainer container kubepods-besteffort-poda14197d6_2780_4c4a_9c8c_3f00c12904e5.slice. Jul 15 05:16:47.662235 containerd[2010]: time="2025-07-15T05:16:47.662187869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\" id:\"b93e0b60f57cba18888fb42e569abcd183b5ddf84ad9030e20edfc35cfbd1e7f\" pid:4706 exit_status:1 exited_at:{seconds:1752556607 nanos:661393371}" Jul 15 05:16:47.715242 systemd[1]: Created slice kubepods-besteffort-podd8791827_7a67_4ea5_8c60_f16d0f5521b3.slice - libcontainer container kubepods-besteffort-podd8791827_7a67_4ea5_8c60_f16d0f5521b3.slice. 
Jul 15 05:16:47.794597 kubelet[3337]: I0715 05:16:47.794426 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8791827-7a67-4ea5-8c60-f16d0f5521b3-whisker-backend-key-pair\") pod \"whisker-665859d6f7-wrsm5\" (UID: \"d8791827-7a67-4ea5-8c60-f16d0f5521b3\") " pod="calico-system/whisker-665859d6f7-wrsm5" Jul 15 05:16:47.794597 kubelet[3337]: I0715 05:16:47.794482 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bb4\" (UniqueName: \"kubernetes.io/projected/d8791827-7a67-4ea5-8c60-f16d0f5521b3-kube-api-access-g5bb4\") pod \"whisker-665859d6f7-wrsm5\" (UID: \"d8791827-7a67-4ea5-8c60-f16d0f5521b3\") " pod="calico-system/whisker-665859d6f7-wrsm5" Jul 15 05:16:47.794597 kubelet[3337]: I0715 05:16:47.794514 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8791827-7a67-4ea5-8c60-f16d0f5521b3-whisker-ca-bundle\") pod \"whisker-665859d6f7-wrsm5\" (UID: \"d8791827-7a67-4ea5-8c60-f16d0f5521b3\") " pod="calico-system/whisker-665859d6f7-wrsm5" Jul 15 05:16:48.022812 containerd[2010]: time="2025-07-15T05:16:48.022210898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-665859d6f7-wrsm5,Uid:d8791827-7a67-4ea5-8c60-f16d0f5521b3,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:48.234796 kubelet[3337]: I0715 05:16:48.234712 3337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14197d6-2780-4c4a-9c8c-3f00c12904e5" path="/var/lib/kubelet/pods/a14197d6-2780-4c4a-9c8c-3f00c12904e5/volumes" Jul 15 05:16:48.621197 containerd[2010]: time="2025-07-15T05:16:48.621139138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\" id:\"469904cb2bd4a36410720c26ee4d434246a6f3b7c1ec737511191ab3d46fa81d\" pid:4842 exit_status:1 exited_at:{seconds:1752556608 nanos:620851621}" Jul 15 05:16:48.746166 (udev-worker)[4644]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 05:16:48.749756 systemd-networkd[1814]: calia8fbc5bdd9d: Link UP Jul 15 05:16:48.749955 systemd-networkd[1814]: calia8fbc5bdd9d: Gained carrier Jul 15 05:16:48.770966 containerd[2010]: 2025-07-15 05:16:48.185 [INFO][4809] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:48.770966 containerd[2010]: 2025-07-15 05:16:48.259 [INFO][4809] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0 whisker-665859d6f7- calico-system d8791827-7a67-4ea5-8c60-f16d0f5521b3 886 0 2025-07-15 05:16:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:665859d6f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-21-211 whisker-665859d6f7-wrsm5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia8fbc5bdd9d [] [] }} ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Namespace="calico-system" Pod="whisker-665859d6f7-wrsm5" WorkloadEndpoint="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-" Jul 15 05:16:48.770966 containerd[2010]: 2025-07-15 05:16:48.260 [INFO][4809] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Namespace="calico-system" Pod="whisker-665859d6f7-wrsm5" WorkloadEndpoint="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" Jul 15 05:16:48.770966 containerd[2010]: 2025-07-15 05:16:48.654 [INFO][4820] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" HandleID="k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Workload="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.658 [INFO][4820] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" HandleID="k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Workload="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000359d70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-211", "pod":"whisker-665859d6f7-wrsm5", "timestamp":"2025-07-15 05:16:48.654483407 +0000 UTC"}, Hostname:"ip-172-31-21-211", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.658 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.659 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.660 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-211' Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.682 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" host="ip-172-31-21-211" Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.705 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-211" Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.711 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.713 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:48.771252 containerd[2010]: 2025-07-15 05:16:48.716 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:48.771500 containerd[2010]: 2025-07-15 05:16:48.716 [INFO][4820] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" host="ip-172-31-21-211" Jul 15 05:16:48.771500 containerd[2010]: 2025-07-15 05:16:48.717 [INFO][4820] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6 Jul 15 05:16:48.771500 containerd[2010]: 2025-07-15 05:16:48.722 [INFO][4820] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" host="ip-172-31-21-211" Jul 15 05:16:48.771500 containerd[2010]: 2025-07-15 05:16:48.730 [INFO][4820] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.129/26] block=192.168.116.128/26 handle="k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" host="ip-172-31-21-211" Jul 15 05:16:48.771500 containerd[2010]: 2025-07-15 05:16:48.730 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.129/26] handle="k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" host="ip-172-31-21-211" Jul 15 05:16:48.771500 containerd[2010]: 2025-07-15 05:16:48.730 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
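The ipam/ipam.go records above trace Calico's allocation flow: take the host-wide IPAM lock, confirm this node's affinity for the 192.168.116.128/26 block, claim the first free address in it (192.168.116.129), write the block back, and release the lock. Purely to illustrate the address arithmetic (real Calico IPAM also manages handles, reservations, and the datastore write shown above), a toy first-free-address picker looks like this:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree returns the first address in the block that is not the network
// address and not already assigned. A toy stand-in for the block allocation
// described by the ipam/ipam.go records above.
func firstFree(block netip.Prefix, assigned map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !assigned[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.116.128/26")
	ip, ok := firstFree(block, map[netip.Addr]bool{})
	fmt.Println(ip, ok) // 192.168.116.129 true, matching the claimed address above
}
```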
Jul 15 05:16:48.771500 containerd[2010]: 2025-07-15 05:16:48.730 [INFO][4820] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.129/26] IPv6=[] ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" HandleID="k8s-pod-network.6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Workload="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" Jul 15 05:16:48.771898 containerd[2010]: 2025-07-15 05:16:48.732 [INFO][4809] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Namespace="calico-system" Pod="whisker-665859d6f7-wrsm5" WorkloadEndpoint="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0", GenerateName:"whisker-665859d6f7-", Namespace:"calico-system", SelfLink:"", UID:"d8791827-7a67-4ea5-8c60-f16d0f5521b3", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"665859d6f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"", Pod:"whisker-665859d6f7-wrsm5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8fbc5bdd9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:48.771898 containerd[2010]: 2025-07-15 05:16:48.732 [INFO][4809] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.129/32] ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Namespace="calico-system" Pod="whisker-665859d6f7-wrsm5" WorkloadEndpoint="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" Jul 15 05:16:48.772054 containerd[2010]: 2025-07-15 05:16:48.733 [INFO][4809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8fbc5bdd9d ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Namespace="calico-system" Pod="whisker-665859d6f7-wrsm5" WorkloadEndpoint="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" Jul 15 05:16:48.772054 containerd[2010]: 2025-07-15 05:16:48.751 [INFO][4809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Namespace="calico-system" Pod="whisker-665859d6f7-wrsm5" WorkloadEndpoint="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" Jul 15 05:16:48.772106 containerd[2010]: 2025-07-15 05:16:48.751 [INFO][4809] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Namespace="calico-system" Pod="whisker-665859d6f7-wrsm5" 
WorkloadEndpoint="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0", GenerateName:"whisker-665859d6f7-", Namespace:"calico-system", SelfLink:"", UID:"d8791827-7a67-4ea5-8c60-f16d0f5521b3", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"665859d6f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6", Pod:"whisker-665859d6f7-wrsm5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8fbc5bdd9d", MAC:"92:f0:c4:26:60:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:48.772246 containerd[2010]: 2025-07-15 05:16:48.765 [INFO][4809] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" Namespace="calico-system" Pod="whisker-665859d6f7-wrsm5" WorkloadEndpoint="ip--172--31--21--211-k8s-whisker--665859d6f7--wrsm5-eth0" Jul 15 05:16:48.927369 containerd[2010]: time="2025-07-15T05:16:48.926475950Z" level=info msg="connecting to shim 6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6" address="unix:///run/containerd/s/b642a77089b67adbe0943a6b14810ecc9053b3d5784cb8b7fd88d55b158d2006" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:48.954859 systemd[1]: Started cri-containerd-6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6.scope - libcontainer container 6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6. 
Jul 15 05:16:49.019002 containerd[2010]: time="2025-07-15T05:16:49.018960699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-665859d6f7-wrsm5,Uid:d8791827-7a67-4ea5-8c60-f16d0f5521b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6\"" Jul 15 05:16:49.034335 containerd[2010]: time="2025-07-15T05:16:49.034292084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 05:16:49.227290 containerd[2010]: time="2025-07-15T05:16:49.227042688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8hfgl,Uid:eac85450-9178-4ce3-a16e-099c7bccfe9f,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:49.227290 containerd[2010]: time="2025-07-15T05:16:49.227119610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfc8q,Uid:d668f1c9-e444-45ed-aa1c-55630e5a2640,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:49.475106 systemd-networkd[1814]: cali5ad0528fa36: Link UP Jul 15 05:16:49.477822 systemd-networkd[1814]: cali5ad0528fa36: Gained carrier Jul 15 05:16:49.523446 containerd[2010]: 2025-07-15 05:16:49.285 [INFO][4926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:49.523446 containerd[2010]: 2025-07-15 05:16:49.302 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0 csi-node-driver- calico-system d668f1c9-e444-45ed-aa1c-55630e5a2640 706 0 2025-07-15 05:16:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-21-211 csi-node-driver-gfc8q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5ad0528fa36 [] [] }} ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Namespace="calico-system" Pod="csi-node-driver-gfc8q" WorkloadEndpoint="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-" Jul 15 05:16:49.523446 containerd[2010]: 2025-07-15 05:16:49.303 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Namespace="calico-system" Pod="csi-node-driver-gfc8q" WorkloadEndpoint="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" Jul 15 05:16:49.523446 containerd[2010]: 2025-07-15 05:16:49.373 [INFO][4939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" HandleID="k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Workload="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.374 [INFO][4939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" HandleID="k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Workload="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-211", "pod":"csi-node-driver-gfc8q", "timestamp":"2025-07-15 
05:16:49.37298744 +0000 UTC"}, Hostname:"ip-172-31-21-211", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.374 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.374 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.374 [INFO][4939] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-211' Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.388 [INFO][4939] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" host="ip-172-31-21-211" Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.401 [INFO][4939] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-211" Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.416 [INFO][4939] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.419 [INFO][4939] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:49.524222 containerd[2010]: 2025-07-15 05:16:49.429 [INFO][4939] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:49.524652 containerd[2010]: 2025-07-15 05:16:49.429 [INFO][4939] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" host="ip-172-31-21-211" Jul 15 05:16:49.524652 containerd[2010]: 2025-07-15 05:16:49.434 [INFO][4939] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0 Jul 15 05:16:49.524652 containerd[2010]: 2025-07-15 05:16:49.443 [INFO][4939] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" host="ip-172-31-21-211" Jul 15 05:16:49.524652 containerd[2010]: 2025-07-15 05:16:49.453 [INFO][4939] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.130/26] block=192.168.116.128/26 handle="k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" host="ip-172-31-21-211" Jul 15 05:16:49.524652 containerd[2010]: 2025-07-15 05:16:49.454 [INFO][4939] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.130/26] handle="k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" host="ip-172-31-21-211" Jul 15 05:16:49.524652 containerd[2010]: 2025-07-15 05:16:49.454 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:49.524652 containerd[2010]: 2025-07-15 05:16:49.454 [INFO][4939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.130/26] IPv6=[] ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" HandleID="k8s-pod-network.0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Workload="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" Jul 15 05:16:49.525836 containerd[2010]: 2025-07-15 05:16:49.470 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Namespace="calico-system" Pod="csi-node-driver-gfc8q" WorkloadEndpoint="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d668f1c9-e444-45ed-aa1c-55630e5a2640", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"", Pod:"csi-node-driver-gfc8q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5ad0528fa36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:49.525955 containerd[2010]: 2025-07-15 05:16:49.470 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.130/32] ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Namespace="calico-system" Pod="csi-node-driver-gfc8q" WorkloadEndpoint="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" Jul 15 05:16:49.525955 containerd[2010]: 2025-07-15 05:16:49.470 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ad0528fa36 ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Namespace="calico-system" Pod="csi-node-driver-gfc8q" WorkloadEndpoint="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" Jul 15 05:16:49.525955 containerd[2010]: 2025-07-15 05:16:49.482 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Namespace="calico-system" Pod="csi-node-driver-gfc8q" WorkloadEndpoint="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" Jul 15 05:16:49.526085 containerd[2010]: 2025-07-15 05:16:49.483 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" 
Namespace="calico-system" Pod="csi-node-driver-gfc8q" WorkloadEndpoint="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d668f1c9-e444-45ed-aa1c-55630e5a2640", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0", Pod:"csi-node-driver-gfc8q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5ad0528fa36", MAC:"16:68:e1:7e:89:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:49.526182 containerd[2010]: 2025-07-15 05:16:49.519 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" Namespace="calico-system" Pod="csi-node-driver-gfc8q" WorkloadEndpoint="ip--172--31--21--211-k8s-csi--node--driver--gfc8q-eth0" Jul 15 05:16:49.582554 containerd[2010]: time="2025-07-15T05:16:49.582510540Z" level=info msg="connecting to shim 0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0" address="unix:///run/containerd/s/64074b93411947a77e1f5ccdf3a237def9da0df286036437fb735781b827f03d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:49.648073 systemd[1]: Started cri-containerd-0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0.scope - libcontainer container 0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0. 
Jul 15 05:16:49.653782 systemd-networkd[1814]: calid6796c3f941: Link UP Jul 15 05:16:49.654083 systemd-networkd[1814]: calid6796c3f941: Gained carrier Jul 15 05:16:49.681694 containerd[2010]: 2025-07-15 05:16:49.325 [INFO][4915] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:49.681694 containerd[2010]: 2025-07-15 05:16:49.350 [INFO][4915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0 goldmane-768f4c5c69- calico-system eac85450-9178-4ce3-a16e-099c7bccfe9f 815 0 2025-07-15 05:16:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-21-211 goldmane-768f4c5c69-8hfgl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid6796c3f941 [] [] }} ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Namespace="calico-system" Pod="goldmane-768f4c5c69-8hfgl" WorkloadEndpoint="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-" Jul 15 05:16:49.681694 containerd[2010]: 2025-07-15 05:16:49.351 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Namespace="calico-system" Pod="goldmane-768f4c5c69-8hfgl" WorkloadEndpoint="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" Jul 15 05:16:49.681694 containerd[2010]: 2025-07-15 05:16:49.468 [INFO][4954] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" HandleID="k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Workload="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.468 [INFO][4954] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" HandleID="k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Workload="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-211", "pod":"goldmane-768f4c5c69-8hfgl", "timestamp":"2025-07-15 05:16:49.468299275 +0000 UTC"}, Hostname:"ip-172-31-21-211", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.468 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.469 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.469 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-211' Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.522 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" host="ip-172-31-21-211" Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.538 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-211" Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.559 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.570 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:49.682017 containerd[2010]: 2025-07-15 05:16:49.577 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:49.685358 containerd[2010]: 2025-07-15 05:16:49.577 [INFO][4954] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" host="ip-172-31-21-211" Jul 15 05:16:49.685358 containerd[2010]: 2025-07-15 05:16:49.585 [INFO][4954] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f Jul 15 05:16:49.685358 containerd[2010]: 2025-07-15 05:16:49.598 [INFO][4954] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" host="ip-172-31-21-211" Jul 15 05:16:49.685358 containerd[2010]: 2025-07-15 05:16:49.623 [INFO][4954] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.131/26] block=192.168.116.128/26 handle="k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" host="ip-172-31-21-211" Jul 15 05:16:49.685358 containerd[2010]: 2025-07-15 05:16:49.623 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.131/26] handle="k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" host="ip-172-31-21-211" Jul 15 05:16:49.685358 containerd[2010]: 2025-07-15 05:16:49.624 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
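Annotation: the systemd-networkd lines in this stretch (calia8fbc5bdd9d, cali5ad0528fa36, calid6796c3f941: Link UP / Gained carrier) are the host-side veth ends that Calico creates per workload endpoint, each named "cali" plus a per-endpoint suffix. A quick way to enumerate them on the node is the standard-library interface list; the sketch below is illustrative and makes no Calico API calls.

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"strings"
    )

    func main() {
    	ifs, err := net.Interfaces()
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Lists the host-side pod interfaces, matching the calia8fbc5bdd9d,
    	// cali5ad0528fa36 and calid6796c3f941 devices reported above.
    	for _, i := range ifs {
    		if strings.HasPrefix(i.Name, "cali") {
    			fmt.Println(i.Name, i.Flags)
    		}
    	}
    }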
Jul 15 05:16:49.685358 containerd[2010]: 2025-07-15 05:16:49.624 [INFO][4954] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.131/26] IPv6=[] ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" HandleID="k8s-pod-network.0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Workload="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" Jul 15 05:16:49.685642 containerd[2010]: 2025-07-15 05:16:49.638 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Namespace="calico-system" Pod="goldmane-768f4c5c69-8hfgl" WorkloadEndpoint="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"eac85450-9178-4ce3-a16e-099c7bccfe9f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"", Pod:"goldmane-768f4c5c69-8hfgl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid6796c3f941", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:49.685642 containerd[2010]: 2025-07-15 05:16:49.639 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.131/32] ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Namespace="calico-system" Pod="goldmane-768f4c5c69-8hfgl" WorkloadEndpoint="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" Jul 15 05:16:49.685864 containerd[2010]: 2025-07-15 05:16:49.639 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6796c3f941 ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Namespace="calico-system" Pod="goldmane-768f4c5c69-8hfgl" WorkloadEndpoint="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" Jul 15 05:16:49.685864 containerd[2010]: 2025-07-15 05:16:49.654 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Namespace="calico-system" Pod="goldmane-768f4c5c69-8hfgl" WorkloadEndpoint="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" Jul 15 05:16:49.685942 containerd[2010]: 2025-07-15 05:16:49.655 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Namespace="calico-system" Pod="goldmane-768f4c5c69-8hfgl" 
WorkloadEndpoint="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"eac85450-9178-4ce3-a16e-099c7bccfe9f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f", Pod:"goldmane-768f4c5c69-8hfgl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid6796c3f941", MAC:"e2:40:37:50:54:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:49.686044 containerd[2010]: 2025-07-15 05:16:49.674 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" Namespace="calico-system" Pod="goldmane-768f4c5c69-8hfgl" WorkloadEndpoint="ip--172--31--21--211-k8s-goldmane--768f4c5c69--8hfgl-eth0" Jul 15 05:16:49.757032 containerd[2010]: time="2025-07-15T05:16:49.756853593Z" level=info msg="connecting to shim 0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f" address="unix:///run/containerd/s/a67dac17bc6ce44a23150190bb7e88358686edc140725e449ba45449a214c193" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:49.767752 containerd[2010]: time="2025-07-15T05:16:49.767635021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfc8q,Uid:d668f1c9-e444-45ed-aa1c-55630e5a2640,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0\"" Jul 15 05:16:49.803922 systemd[1]: Started cri-containerd-0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f.scope - libcontainer container 0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f. 
Jul 15 05:16:49.861505 containerd[2010]: time="2025-07-15T05:16:49.861468567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8hfgl,Uid:eac85450-9178-4ce3-a16e-099c7bccfe9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f\"" Jul 15 05:16:50.100096 systemd-networkd[1814]: calia8fbc5bdd9d: Gained IPv6LL Jul 15 05:16:50.227684 containerd[2010]: time="2025-07-15T05:16:50.227343606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76587bdddf-m5mzb,Uid:9256fa51-e128-4b6d-af1f-407f753c667b,Namespace:calico-system,Attempt:0,}" Jul 15 05:16:50.227684 containerd[2010]: time="2025-07-15T05:16:50.227457079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d747b47d9-9kmx9,Uid:089f7323-e931-4192-ab88-cfa8c0d68fec,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:50.227684 containerd[2010]: time="2025-07-15T05:16:50.227588558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d747b47d9-86lt6,Uid:5a9bb308-2535-472f-9c1c-5a5226fa3191,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:16:50.450761 containerd[2010]: time="2025-07-15T05:16:50.450630909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:50.455552 containerd[2010]: time="2025-07-15T05:16:50.455482790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 05:16:50.457690 containerd[2010]: time="2025-07-15T05:16:50.457585513Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:50.469348 containerd[2010]: time="2025-07-15T05:16:50.469291065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:50.470428 containerd[2010]: time="2025-07-15T05:16:50.470289308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.435939368s" Jul 15 05:16:50.470428 containerd[2010]: time="2025-07-15T05:16:50.470326179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 05:16:50.473030 systemd-networkd[1814]: cali3f98176bfc2: Link UP Jul 15 05:16:50.474713 systemd-networkd[1814]: cali3f98176bfc2: Gained carrier Jul 15 05:16:50.485642 containerd[2010]: time="2025-07-15T05:16:50.485590440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 05:16:50.494856 containerd[2010]: 2025-07-15 05:16:50.298 [INFO][5094] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:50.494856 containerd[2010]: 2025-07-15 05:16:50.324 [INFO][5094] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0 calico-apiserver-6d747b47d9- 
calico-apiserver 5a9bb308-2535-472f-9c1c-5a5226fa3191 812 0 2025-07-15 05:16:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d747b47d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-211 calico-apiserver-6d747b47d9-86lt6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f98176bfc2 [] [] }} ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-86lt6" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-" Jul 15 05:16:50.494856 containerd[2010]: 2025-07-15 05:16:50.324 [INFO][5094] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-86lt6" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" Jul 15 05:16:50.494856 containerd[2010]: 2025-07-15 05:16:50.395 [INFO][5120] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" HandleID="k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Workload="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.395 [INFO][5120] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" HandleID="k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Workload="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-211", "pod":"calico-apiserver-6d747b47d9-86lt6", "timestamp":"2025-07-15 05:16:50.395571308 +0000 UTC"}, Hostname:"ip-172-31-21-211", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.395 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.395 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.395 [INFO][5120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-211' Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.409 [INFO][5120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" host="ip-172-31-21-211" Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.419 [INFO][5120] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-211" Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.427 [INFO][5120] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.430 [INFO][5120] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.495118 containerd[2010]: 2025-07-15 05:16:50.433 [INFO][5120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.495334 containerd[2010]: 2025-07-15 05:16:50.433 [INFO][5120] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" host="ip-172-31-21-211" Jul 15 05:16:50.495334 containerd[2010]: 2025-07-15 05:16:50.435 [INFO][5120] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c Jul 15 05:16:50.495334 containerd[2010]: 2025-07-15 05:16:50.440 [INFO][5120] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" host="ip-172-31-21-211" Jul 15 05:16:50.495334 containerd[2010]: 2025-07-15 05:16:50.453 [INFO][5120] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.132/26] block=192.168.116.128/26 handle="k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" host="ip-172-31-21-211" Jul 15 05:16:50.495334 containerd[2010]: 2025-07-15 05:16:50.453 [INFO][5120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.132/26] handle="k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" host="ip-172-31-21-211" Jul 15 05:16:50.495334 containerd[2010]: 2025-07-15 05:16:50.453 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
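Annotation: a few entries back, containerd reports the whisker image pull completing: repo size 6153902 bytes in 1.435939368s, roughly 4.3 MB/s (the earlier "bytes read=4661207" figure is smaller, presumably because the transfer was compressed or partially cached). The sketch below simply reproduces that arithmetic from the two figures in the log.

    package main

    import "fmt"

    func main() {
    	// Figures taken from the PullImage result in the log above.
    	const sizeBytes = 6153902
    	const seconds = 1.435939368

    	rate := float64(sizeBytes) / seconds
    	fmt.Printf("≈ %.2f MB/s (%.2f MiB/s)\n", rate/1e6, rate/(1<<20))
    }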
Jul 15 05:16:50.495334 containerd[2010]: 2025-07-15 05:16:50.453 [INFO][5120] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.132/26] IPv6=[] ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" HandleID="k8s-pod-network.458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Workload="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" Jul 15 05:16:50.495968 containerd[2010]: 2025-07-15 05:16:50.457 [INFO][5094] cni-plugin/k8s.go 418: Populated endpoint ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-86lt6" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0", GenerateName:"calico-apiserver-6d747b47d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a9bb308-2535-472f-9c1c-5a5226fa3191", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d747b47d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"", Pod:"calico-apiserver-6d747b47d9-86lt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f98176bfc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:50.496043 containerd[2010]: 2025-07-15 05:16:50.459 [INFO][5094] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.132/32] ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-86lt6" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" Jul 15 05:16:50.496043 containerd[2010]: 2025-07-15 05:16:50.459 [INFO][5094] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f98176bfc2 ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-86lt6" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" Jul 15 05:16:50.496043 containerd[2010]: 2025-07-15 05:16:50.473 [INFO][5094] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-86lt6" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" Jul 15 05:16:50.496117 containerd[2010]: 2025-07-15 05:16:50.475 [INFO][5094] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-86lt6" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0", GenerateName:"calico-apiserver-6d747b47d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a9bb308-2535-472f-9c1c-5a5226fa3191", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d747b47d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c", Pod:"calico-apiserver-6d747b47d9-86lt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f98176bfc2", MAC:"02:6d:c8:4a:04:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:50.496230 containerd[2010]: 2025-07-15 05:16:50.492 [INFO][5094] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-86lt6" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--86lt6-eth0" Jul 15 05:16:50.519375 containerd[2010]: time="2025-07-15T05:16:50.519216955Z" level=info msg="CreateContainer within sandbox \"6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 05:16:50.538476 containerd[2010]: time="2025-07-15T05:16:50.538438198Z" level=info msg="Container ca774c0343569cc1ccbbfde9704e075af3681ba595e04ed2dbd634b60a9fbad3: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:50.538614 containerd[2010]: time="2025-07-15T05:16:50.538592217Z" level=info msg="connecting to shim 458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c" address="unix:///run/containerd/s/ac5ed6ff5c6e8e0dba88254cf4cbe1a4727917d2e65a7b8c35e1ef60eedf53c2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:50.546187 systemd-networkd[1814]: cali5ad0528fa36: Gained IPv6LL Jul 15 05:16:50.555656 containerd[2010]: time="2025-07-15T05:16:50.555620686Z" level=info msg="CreateContainer within sandbox \"6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ca774c0343569cc1ccbbfde9704e075af3681ba595e04ed2dbd634b60a9fbad3\"" Jul 15 05:16:50.557686 containerd[2010]: time="2025-07-15T05:16:50.557165194Z" level=info msg="StartContainer for 
\"ca774c0343569cc1ccbbfde9704e075af3681ba595e04ed2dbd634b60a9fbad3\"" Jul 15 05:16:50.559960 containerd[2010]: time="2025-07-15T05:16:50.559934138Z" level=info msg="connecting to shim ca774c0343569cc1ccbbfde9704e075af3681ba595e04ed2dbd634b60a9fbad3" address="unix:///run/containerd/s/b642a77089b67adbe0943a6b14810ecc9053b3d5784cb8b7fd88d55b158d2006" protocol=ttrpc version=3 Jul 15 05:16:50.575892 systemd[1]: Started cri-containerd-458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c.scope - libcontainer container 458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c. Jul 15 05:16:50.592822 systemd[1]: Started cri-containerd-ca774c0343569cc1ccbbfde9704e075af3681ba595e04ed2dbd634b60a9fbad3.scope - libcontainer container ca774c0343569cc1ccbbfde9704e075af3681ba595e04ed2dbd634b60a9fbad3. Jul 15 05:16:50.596525 systemd-networkd[1814]: cali4a6b392f93a: Link UP Jul 15 05:16:50.600738 systemd-networkd[1814]: cali4a6b392f93a: Gained carrier Jul 15 05:16:50.642849 containerd[2010]: 2025-07-15 05:16:50.301 [INFO][5086] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:50.642849 containerd[2010]: 2025-07-15 05:16:50.339 [INFO][5086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0 calico-apiserver-6d747b47d9- calico-apiserver 089f7323-e931-4192-ab88-cfa8c0d68fec 810 0 2025-07-15 05:16:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d747b47d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-211 calico-apiserver-6d747b47d9-9kmx9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a6b392f93a [] [] }} ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-9kmx9" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-" Jul 15 05:16:50.642849 containerd[2010]: 2025-07-15 05:16:50.339 [INFO][5086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-9kmx9" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" Jul 15 05:16:50.642849 containerd[2010]: 2025-07-15 05:16:50.395 [INFO][5127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" HandleID="k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Workload="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.395 [INFO][5127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" HandleID="k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Workload="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fbe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-211", "pod":"calico-apiserver-6d747b47d9-9kmx9", "timestamp":"2025-07-15 05:16:50.395327838 +0000 UTC"}, 
Hostname:"ip-172-31-21-211", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.395 [INFO][5127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.454 [INFO][5127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.454 [INFO][5127] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-211' Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.511 [INFO][5127] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" host="ip-172-31-21-211" Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.531 [INFO][5127] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-211" Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.539 [INFO][5127] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.542 [INFO][5127] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.643137 containerd[2010]: 2025-07-15 05:16:50.548 [INFO][5127] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.643397 containerd[2010]: 2025-07-15 05:16:50.548 [INFO][5127] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" host="ip-172-31-21-211" Jul 15 05:16:50.643397 containerd[2010]: 2025-07-15 05:16:50.550 [INFO][5127] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b Jul 15 05:16:50.643397 containerd[2010]: 2025-07-15 05:16:50.564 [INFO][5127] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" host="ip-172-31-21-211" Jul 15 05:16:50.643397 containerd[2010]: 2025-07-15 05:16:50.581 [INFO][5127] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.133/26] block=192.168.116.128/26 handle="k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" host="ip-172-31-21-211" Jul 15 05:16:50.643397 containerd[2010]: 2025-07-15 05:16:50.581 [INFO][5127] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.133/26] handle="k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" host="ip-172-31-21-211" Jul 15 05:16:50.643397 containerd[2010]: 2025-07-15 05:16:50.581 [INFO][5127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:50.643397 containerd[2010]: 2025-07-15 05:16:50.581 [INFO][5127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.133/26] IPv6=[] ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" HandleID="k8s-pod-network.f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Workload="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" Jul 15 05:16:50.643561 containerd[2010]: 2025-07-15 05:16:50.592 [INFO][5086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-9kmx9" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0", GenerateName:"calico-apiserver-6d747b47d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"089f7323-e931-4192-ab88-cfa8c0d68fec", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d747b47d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"", Pod:"calico-apiserver-6d747b47d9-9kmx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a6b392f93a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:50.643619 containerd[2010]: 2025-07-15 05:16:50.592 [INFO][5086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.133/32] ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-9kmx9" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" Jul 15 05:16:50.643619 containerd[2010]: 2025-07-15 05:16:50.592 [INFO][5086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a6b392f93a ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-9kmx9" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" Jul 15 05:16:50.643619 containerd[2010]: 2025-07-15 05:16:50.601 [INFO][5086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-9kmx9" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" Jul 15 05:16:50.645005 containerd[2010]: 2025-07-15 05:16:50.602 [INFO][5086] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-9kmx9" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0", GenerateName:"calico-apiserver-6d747b47d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"089f7323-e931-4192-ab88-cfa8c0d68fec", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d747b47d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b", Pod:"calico-apiserver-6d747b47d9-9kmx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a6b392f93a", MAC:"6a:5b:e1:f1:9c:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:50.645088 containerd[2010]: 2025-07-15 05:16:50.624 [INFO][5086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" Namespace="calico-apiserver" Pod="calico-apiserver-6d747b47d9-9kmx9" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--apiserver--6d747b47d9--9kmx9-eth0" Jul 15 05:16:50.699898 systemd-networkd[1814]: cali402c11803dc: Link UP Jul 15 05:16:50.701636 systemd-networkd[1814]: cali402c11803dc: Gained carrier Jul 15 05:16:50.709019 containerd[2010]: time="2025-07-15T05:16:50.708983070Z" level=info msg="connecting to shim f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b" address="unix:///run/containerd/s/95989268b0908cd0748a221326b9f60bfb11a3937cbf28313965eb13baccb07a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:50.742657 containerd[2010]: 2025-07-15 05:16:50.301 [INFO][5081] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:50.742657 containerd[2010]: 2025-07-15 05:16:50.337 [INFO][5081] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0 calico-kube-controllers-76587bdddf- calico-system 9256fa51-e128-4b6d-af1f-407f753c667b 816 0 2025-07-15 05:16:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76587bdddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-211 
calico-kube-controllers-76587bdddf-m5mzb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali402c11803dc [] [] }} ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Namespace="calico-system" Pod="calico-kube-controllers-76587bdddf-m5mzb" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-" Jul 15 05:16:50.742657 containerd[2010]: 2025-07-15 05:16:50.337 [INFO][5081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Namespace="calico-system" Pod="calico-kube-controllers-76587bdddf-m5mzb" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" Jul 15 05:16:50.742657 containerd[2010]: 2025-07-15 05:16:50.410 [INFO][5125] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" HandleID="k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Workload="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.412 [INFO][5125] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" HandleID="k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Workload="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123d70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-211", "pod":"calico-kube-controllers-76587bdddf-m5mzb", "timestamp":"2025-07-15 05:16:50.410722004 +0000 UTC"}, Hostname:"ip-172-31-21-211", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.413 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.582 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.582 [INFO][5125] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-211' Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.616 [INFO][5125] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" host="ip-172-31-21-211" Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.632 [INFO][5125] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-211" Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.645 [INFO][5125] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.651 [INFO][5125] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.743760 containerd[2010]: 2025-07-15 05:16:50.658 [INFO][5125] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:50.745527 containerd[2010]: 2025-07-15 05:16:50.659 [INFO][5125] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" host="ip-172-31-21-211" Jul 15 05:16:50.745527 containerd[2010]: 2025-07-15 05:16:50.662 [INFO][5125] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7 Jul 15 05:16:50.745527 containerd[2010]: 2025-07-15 05:16:50.669 [INFO][5125] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" host="ip-172-31-21-211" Jul 15 05:16:50.745527 containerd[2010]: 2025-07-15 05:16:50.686 [INFO][5125] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.134/26] block=192.168.116.128/26 handle="k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" host="ip-172-31-21-211" Jul 15 05:16:50.745527 containerd[2010]: 2025-07-15 05:16:50.687 [INFO][5125] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.134/26] handle="k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" host="ip-172-31-21-211" Jul 15 05:16:50.745527 containerd[2010]: 2025-07-15 05:16:50.687 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:16:50.745527 containerd[2010]: 2025-07-15 05:16:50.687 [INFO][5125] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.134/26] IPv6=[] ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" HandleID="k8s-pod-network.4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Workload="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" Jul 15 05:16:50.745717 containerd[2010]: 2025-07-15 05:16:50.693 [INFO][5081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Namespace="calico-system" Pod="calico-kube-controllers-76587bdddf-m5mzb" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0", GenerateName:"calico-kube-controllers-76587bdddf-", Namespace:"calico-system", SelfLink:"", UID:"9256fa51-e128-4b6d-af1f-407f753c667b", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76587bdddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"", Pod:"calico-kube-controllers-76587bdddf-m5mzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali402c11803dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:50.745786 containerd[2010]: 2025-07-15 05:16:50.693 [INFO][5081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.134/32] ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Namespace="calico-system" Pod="calico-kube-controllers-76587bdddf-m5mzb" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" Jul 15 05:16:50.745786 containerd[2010]: 2025-07-15 05:16:50.693 [INFO][5081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali402c11803dc ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Namespace="calico-system" Pod="calico-kube-controllers-76587bdddf-m5mzb" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" Jul 15 05:16:50.745786 containerd[2010]: 2025-07-15 05:16:50.703 [INFO][5081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Namespace="calico-system" Pod="calico-kube-controllers-76587bdddf-m5mzb" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" Jul 15 05:16:50.745859 containerd[2010]: 
2025-07-15 05:16:50.705 [INFO][5081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Namespace="calico-system" Pod="calico-kube-controllers-76587bdddf-m5mzb" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0", GenerateName:"calico-kube-controllers-76587bdddf-", Namespace:"calico-system", SelfLink:"", UID:"9256fa51-e128-4b6d-af1f-407f753c667b", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76587bdddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7", Pod:"calico-kube-controllers-76587bdddf-m5mzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali402c11803dc", MAC:"72:a4:3f:76:b9:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:50.745919 containerd[2010]: 2025-07-15 05:16:50.724 [INFO][5081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" Namespace="calico-system" Pod="calico-kube-controllers-76587bdddf-m5mzb" WorkloadEndpoint="ip--172--31--21--211-k8s-calico--kube--controllers--76587bdddf--m5mzb-eth0" Jul 15 05:16:50.749958 systemd[1]: Started cri-containerd-f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b.scope - libcontainer container f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b. 
Jul 15 05:16:50.797497 containerd[2010]: time="2025-07-15T05:16:50.797459301Z" level=info msg="StartContainer for \"ca774c0343569cc1ccbbfde9704e075af3681ba595e04ed2dbd634b60a9fbad3\" returns successfully" Jul 15 05:16:50.802827 systemd-networkd[1814]: calid6796c3f941: Gained IPv6LL Jul 15 05:16:50.805641 containerd[2010]: time="2025-07-15T05:16:50.805521620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d747b47d9-86lt6,Uid:5a9bb308-2535-472f-9c1c-5a5226fa3191,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c\"" Jul 15 05:16:50.824179 containerd[2010]: time="2025-07-15T05:16:50.824092805Z" level=info msg="connecting to shim 4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7" address="unix:///run/containerd/s/62b65ee76b99573dcd5af3467ac410c7848c305e6de44412d9613f101070f6b8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:50.877814 systemd[1]: Started cri-containerd-4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7.scope - libcontainer container 4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7. Jul 15 05:16:50.894733 containerd[2010]: time="2025-07-15T05:16:50.893836968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d747b47d9-9kmx9,Uid:089f7323-e931-4192-ab88-cfa8c0d68fec,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b\"" Jul 15 05:16:50.974369 containerd[2010]: time="2025-07-15T05:16:50.974212926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76587bdddf-m5mzb,Uid:9256fa51-e128-4b6d-af1f-407f753c667b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7\"" Jul 15 05:16:51.231409 containerd[2010]: time="2025-07-15T05:16:51.231268590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvn2j,Uid:dceb10b0-f2a4-468d-8f0c-c0d5cbe355da,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:51.356557 systemd-networkd[1814]: cali40685d174cc: Link UP Jul 15 05:16:51.357170 systemd-networkd[1814]: cali40685d174cc: Gained carrier Jul 15 05:16:51.372740 containerd[2010]: 2025-07-15 05:16:51.259 [INFO][5352] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:51.372740 containerd[2010]: 2025-07-15 05:16:51.271 [INFO][5352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0 coredns-674b8bbfcf- kube-system dceb10b0-f2a4-468d-8f0c-c0d5cbe355da 811 0 2025-07-15 05:16:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-211 coredns-674b8bbfcf-bvn2j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40685d174cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvn2j" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-" Jul 15 05:16:51.372740 containerd[2010]: 2025-07-15 05:16:51.271 [INFO][5352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-bvn2j" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" Jul 15 05:16:51.372740 containerd[2010]: 2025-07-15 05:16:51.303 [INFO][5364] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" HandleID="k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Workload="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.304 [INFO][5364] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" HandleID="k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Workload="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-211", "pod":"coredns-674b8bbfcf-bvn2j", "timestamp":"2025-07-15 05:16:51.303926449 +0000 UTC"}, Hostname:"ip-172-31-21-211", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.304 [INFO][5364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.304 [INFO][5364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.304 [INFO][5364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-211' Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.313 [INFO][5364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" host="ip-172-31-21-211" Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.320 [INFO][5364] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-211" Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.326 [INFO][5364] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.329 [INFO][5364] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:51.373060 containerd[2010]: 2025-07-15 05:16:51.332 [INFO][5364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:51.373347 containerd[2010]: 2025-07-15 05:16:51.332 [INFO][5364] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" host="ip-172-31-21-211" Jul 15 05:16:51.373347 containerd[2010]: 2025-07-15 05:16:51.334 [INFO][5364] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215 Jul 15 05:16:51.373347 containerd[2010]: 2025-07-15 05:16:51.342 [INFO][5364] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" host="ip-172-31-21-211" Jul 15 05:16:51.373347 containerd[2010]: 2025-07-15 05:16:51.351 [INFO][5364] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.116.135/26] block=192.168.116.128/26 handle="k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" host="ip-172-31-21-211" Jul 15 05:16:51.373347 containerd[2010]: 2025-07-15 05:16:51.352 [INFO][5364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.135/26] handle="k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" host="ip-172-31-21-211" Jul 15 05:16:51.373347 containerd[2010]: 2025-07-15 05:16:51.352 [INFO][5364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:16:51.373347 containerd[2010]: 2025-07-15 05:16:51.352 [INFO][5364] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.135/26] IPv6=[] ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" HandleID="k8s-pod-network.c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Workload="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" Jul 15 05:16:51.374068 containerd[2010]: 2025-07-15 05:16:51.354 [INFO][5352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvn2j" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dceb10b0-f2a4-468d-8f0c-c0d5cbe355da", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"", Pod:"coredns-674b8bbfcf-bvn2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40685d174cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.374068 containerd[2010]: 2025-07-15 05:16:51.354 [INFO][5352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.135/32] ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvn2j" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" Jul 15 05:16:51.374068 containerd[2010]: 2025-07-15 05:16:51.354 [INFO][5352] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali40685d174cc ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvn2j" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" Jul 15 05:16:51.374068 containerd[2010]: 2025-07-15 05:16:51.356 [INFO][5352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvn2j" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" Jul 15 05:16:51.374068 containerd[2010]: 2025-07-15 05:16:51.357 [INFO][5352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvn2j" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dceb10b0-f2a4-468d-8f0c-c0d5cbe355da", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215", Pod:"coredns-674b8bbfcf-bvn2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40685d174cc", MAC:"6e:8e:60:ab:81:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:51.374068 containerd[2010]: 2025-07-15 05:16:51.369 [INFO][5352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvn2j" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--bvn2j-eth0" Jul 15 05:16:51.409476 containerd[2010]: time="2025-07-15T05:16:51.409396148Z" level=info msg="connecting to shim c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215" address="unix:///run/containerd/s/2337c075cd83cde81394bb0a67c88829c6c97411c97e233baab3f4d807cd1488" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:51.432886 systemd[1]: 
Started cri-containerd-c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215.scope - libcontainer container c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215. Jul 15 05:16:51.486446 containerd[2010]: time="2025-07-15T05:16:51.486297351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvn2j,Uid:dceb10b0-f2a4-468d-8f0c-c0d5cbe355da,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215\"" Jul 15 05:16:51.495522 containerd[2010]: time="2025-07-15T05:16:51.495475077Z" level=info msg="CreateContainer within sandbox \"c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:16:51.523846 containerd[2010]: time="2025-07-15T05:16:51.523798788Z" level=info msg="Container 9bbabbdbbd1d845730fead83107e991ea7092433d283424c2583158bb15da97b: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:51.536423 containerd[2010]: time="2025-07-15T05:16:51.535932578Z" level=info msg="CreateContainer within sandbox \"c9f8d224ba4215455cba6c7e850370f8bca467ebac91c931ae8270df1c32b215\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bbabbdbbd1d845730fead83107e991ea7092433d283424c2583158bb15da97b\"" Jul 15 05:16:51.537475 containerd[2010]: time="2025-07-15T05:16:51.537391956Z" level=info msg="StartContainer for \"9bbabbdbbd1d845730fead83107e991ea7092433d283424c2583158bb15da97b\"" Jul 15 05:16:51.539377 containerd[2010]: time="2025-07-15T05:16:51.539107754Z" level=info msg="connecting to shim 9bbabbdbbd1d845730fead83107e991ea7092433d283424c2583158bb15da97b" address="unix:///run/containerd/s/2337c075cd83cde81394bb0a67c88829c6c97411c97e233baab3f4d807cd1488" protocol=ttrpc version=3 Jul 15 05:16:51.572879 systemd[1]: Started cri-containerd-9bbabbdbbd1d845730fead83107e991ea7092433d283424c2583158bb15da97b.scope - libcontainer container 9bbabbdbbd1d845730fead83107e991ea7092433d283424c2583158bb15da97b. Jul 15 05:16:51.621154 containerd[2010]: time="2025-07-15T05:16:51.621114552Z" level=info msg="StartContainer for \"9bbabbdbbd1d845730fead83107e991ea7092433d283424c2583158bb15da97b\" returns successfully" Jul 15 05:16:51.761969 systemd-networkd[1814]: cali3f98176bfc2: Gained IPv6LL Jul 15 05:16:51.825874 systemd-networkd[1814]: cali4a6b392f93a: Gained IPv6LL Jul 15 05:16:51.937630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096606080.mount: Deactivated successfully. 
Jul 15 05:16:52.231499 containerd[2010]: time="2025-07-15T05:16:52.230885004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4hpt,Uid:bc6d907d-23fb-4a2b-bf72-3b55f4a815fb,Namespace:kube-system,Attempt:0,}" Jul 15 05:16:52.338125 systemd-networkd[1814]: cali402c11803dc: Gained IPv6LL Jul 15 05:16:52.463649 systemd-networkd[1814]: cali20d3a2809d2: Link UP Jul 15 05:16:52.465183 systemd-networkd[1814]: cali20d3a2809d2: Gained carrier Jul 15 05:16:52.474073 containerd[2010]: time="2025-07-15T05:16:52.473709958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:52.474584 containerd[2010]: time="2025-07-15T05:16:52.474330021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 05:16:52.478121 containerd[2010]: time="2025-07-15T05:16:52.477270889Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:52.483197 containerd[2010]: time="2025-07-15T05:16:52.482729548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:52.491642 containerd[2010]: time="2025-07-15T05:16:52.491535981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.005896671s" Jul 15 05:16:52.491642 containerd[2010]: time="2025-07-15T05:16:52.491584718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 05:16:52.493423 containerd[2010]: time="2025-07-15T05:16:52.493391468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.307 [INFO][5483] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.324 [INFO][5483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0 coredns-674b8bbfcf- kube-system bc6d907d-23fb-4a2b-bf72-3b55f4a815fb 809 0 2025-07-15 05:16:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-211 coredns-674b8bbfcf-n4hpt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali20d3a2809d2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4hpt" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.324 [INFO][5483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-n4hpt" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.382 [INFO][5494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" HandleID="k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Workload="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.383 [INFO][5494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" HandleID="k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Workload="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf640), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-211", "pod":"coredns-674b8bbfcf-n4hpt", "timestamp":"2025-07-15 05:16:52.382706214 +0000 UTC"}, Hostname:"ip-172-31-21-211", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.383 [INFO][5494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.383 [INFO][5494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.383 [INFO][5494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-211' Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.398 [INFO][5494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.409 [INFO][5494] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.421 [INFO][5494] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.425 [INFO][5494] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.431 [INFO][5494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.431 [INFO][5494] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.434 [INFO][5494] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7 Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.441 [INFO][5494] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.456 [INFO][5494] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.116.136/26] block=192.168.116.128/26 handle="k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.456 [INFO][5494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.136/26] handle="k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" host="ip-172-31-21-211" Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.456 [INFO][5494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:16:52.497450 containerd[2010]: 2025-07-15 05:16:52.456 [INFO][5494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.136/26] IPv6=[] ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" HandleID="k8s-pod-network.988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Workload="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" Jul 15 05:16:52.500070 containerd[2010]: 2025-07-15 05:16:52.460 [INFO][5483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4hpt" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc6d907d-23fb-4a2b-bf72-3b55f4a815fb", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"", Pod:"coredns-674b8bbfcf-n4hpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20d3a2809d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:52.500070 containerd[2010]: 2025-07-15 05:16:52.461 [INFO][5483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.136/32] ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4hpt" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" Jul 15 05:16:52.500070 containerd[2010]: 2025-07-15 05:16:52.461 [INFO][5483] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali20d3a2809d2 ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4hpt" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" Jul 15 05:16:52.500070 containerd[2010]: 2025-07-15 05:16:52.465 [INFO][5483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4hpt" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" Jul 15 05:16:52.500070 containerd[2010]: 2025-07-15 05:16:52.466 [INFO][5483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4hpt" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc6d907d-23fb-4a2b-bf72-3b55f4a815fb", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-211", ContainerID:"988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7", Pod:"coredns-674b8bbfcf-n4hpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20d3a2809d2", MAC:"26:5d:1e:11:1f:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:16:52.500070 containerd[2010]: 2025-07-15 05:16:52.491 [INFO][5483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" Namespace="kube-system" Pod="coredns-674b8bbfcf-n4hpt" WorkloadEndpoint="ip--172--31--21--211-k8s-coredns--674b8bbfcf--n4hpt-eth0" Jul 15 05:16:52.509180 kubelet[3337]: I0715 05:16:52.509109 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bvn2j" podStartSLOduration=39.488362007 podStartE2EDuration="39.488362007s" podCreationTimestamp="2025-07-15 05:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:51.690373962 +0000 UTC m=+45.599020842" watchObservedRunningTime="2025-07-15 05:16:52.488362007 +0000 UTC m=+46.397008854" Jul 15 05:16:52.513638 containerd[2010]: time="2025-07-15T05:16:52.513594449Z" level=info msg="CreateContainer within sandbox \"0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 05:16:52.539531 containerd[2010]: time="2025-07-15T05:16:52.539491277Z" level=info msg="Container 7d2b5eda7c4d06a841b819a80794fa3407e361fcfdbb38694d1d7a040971e8e3: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:52.545252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133976357.mount: Deactivated successfully. Jul 15 05:16:52.557980 containerd[2010]: time="2025-07-15T05:16:52.557937883Z" level=info msg="connecting to shim 988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7" address="unix:///run/containerd/s/5cb6286dc4cabdbe238392e60fe095e39b1a30fdda8c7dbd70a19e70ad2127ba" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:16:52.563072 containerd[2010]: time="2025-07-15T05:16:52.563026599Z" level=info msg="CreateContainer within sandbox \"0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7d2b5eda7c4d06a841b819a80794fa3407e361fcfdbb38694d1d7a040971e8e3\"" Jul 15 05:16:52.564395 containerd[2010]: time="2025-07-15T05:16:52.564361885Z" level=info msg="StartContainer for \"7d2b5eda7c4d06a841b819a80794fa3407e361fcfdbb38694d1d7a040971e8e3\"" Jul 15 05:16:52.571330 containerd[2010]: time="2025-07-15T05:16:52.569464894Z" level=info msg="connecting to shim 7d2b5eda7c4d06a841b819a80794fa3407e361fcfdbb38694d1d7a040971e8e3" address="unix:///run/containerd/s/64074b93411947a77e1f5ccdf3a237def9da0df286036437fb735781b827f03d" protocol=ttrpc version=3 Jul 15 05:16:52.606051 systemd[1]: Started cri-containerd-7d2b5eda7c4d06a841b819a80794fa3407e361fcfdbb38694d1d7a040971e8e3.scope - libcontainer container 7d2b5eda7c4d06a841b819a80794fa3407e361fcfdbb38694d1d7a040971e8e3. Jul 15 05:16:52.624511 systemd[1]: Started cri-containerd-988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7.scope - libcontainer container 988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7. 
Jul 15 05:16:52.657791 systemd-networkd[1814]: cali40685d174cc: Gained IPv6LL Jul 15 05:16:52.712782 containerd[2010]: time="2025-07-15T05:16:52.712352402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4hpt,Uid:bc6d907d-23fb-4a2b-bf72-3b55f4a815fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7\"" Jul 15 05:16:52.717253 containerd[2010]: time="2025-07-15T05:16:52.717127331Z" level=info msg="StartContainer for \"7d2b5eda7c4d06a841b819a80794fa3407e361fcfdbb38694d1d7a040971e8e3\" returns successfully" Jul 15 05:16:52.728210 containerd[2010]: time="2025-07-15T05:16:52.727881283Z" level=info msg="CreateContainer within sandbox \"988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:16:52.747707 containerd[2010]: time="2025-07-15T05:16:52.746352871Z" level=info msg="Container f078e87fab8835d3dd2fe48fd3106d79d53a2c5ecbdede159094347de429dd2a: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:52.760627 containerd[2010]: time="2025-07-15T05:16:52.760580423Z" level=info msg="CreateContainer within sandbox \"988666e892c9caf3b6405a28fa90b9c35a9ca9637a0f2848d63f15cbdd55edc7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f078e87fab8835d3dd2fe48fd3106d79d53a2c5ecbdede159094347de429dd2a\"" Jul 15 05:16:52.761252 containerd[2010]: time="2025-07-15T05:16:52.761082430Z" level=info msg="StartContainer for \"f078e87fab8835d3dd2fe48fd3106d79d53a2c5ecbdede159094347de429dd2a\"" Jul 15 05:16:52.761929 containerd[2010]: time="2025-07-15T05:16:52.761903297Z" level=info msg="connecting to shim f078e87fab8835d3dd2fe48fd3106d79d53a2c5ecbdede159094347de429dd2a" address="unix:///run/containerd/s/5cb6286dc4cabdbe238392e60fe095e39b1a30fdda8c7dbd70a19e70ad2127ba" protocol=ttrpc version=3 Jul 15 05:16:52.780836 systemd[1]: Started cri-containerd-f078e87fab8835d3dd2fe48fd3106d79d53a2c5ecbdede159094347de429dd2a.scope - libcontainer container f078e87fab8835d3dd2fe48fd3106d79d53a2c5ecbdede159094347de429dd2a. Jul 15 05:16:52.816620 containerd[2010]: time="2025-07-15T05:16:52.816547610Z" level=info msg="StartContainer for \"f078e87fab8835d3dd2fe48fd3106d79d53a2c5ecbdede159094347de429dd2a\" returns successfully" Jul 15 05:16:53.617951 systemd-networkd[1814]: cali20d3a2809d2: Gained IPv6LL Jul 15 05:16:53.748263 kubelet[3337]: I0715 05:16:53.748159 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n4hpt" podStartSLOduration=40.74810125 podStartE2EDuration="40.74810125s" podCreationTimestamp="2025-07-15 05:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:16:53.717738211 +0000 UTC m=+47.626385053" watchObservedRunningTime="2025-07-15 05:16:53.74810125 +0000 UTC m=+47.656748073" Jul 15 05:16:54.520869 kubelet[3337]: I0715 05:16:54.520827 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:16:55.419031 systemd[1]: Started sshd@9-172.31.21.211:22-139.178.89.65:58354.service - OpenSSH per-connection server daemon (139.178.89.65:58354). Jul 15 05:16:55.663418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904515886.mount: Deactivated successfully. 
Jul 15 05:16:55.738961 sshd[5678]: Accepted publickey for core from 139.178.89.65 port 58354 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:16:55.743184 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:16:55.754253 systemd-logind[1983]: New session 10 of user core. Jul 15 05:16:55.760023 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 05:16:56.525349 ntpd[1978]: Listen normally on 8 calia8fbc5bdd9d [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 05:16:56.527596 ntpd[1978]: 15 Jul 05:16:56 ntpd[1978]: Listen normally on 8 calia8fbc5bdd9d [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 05:16:56.527596 ntpd[1978]: 15 Jul 05:16:56 ntpd[1978]: Listen normally on 9 cali5ad0528fa36 [fe80::ecee:eeff:feee:eeee%5]:123 Jul 15 05:16:56.527596 ntpd[1978]: 15 Jul 05:16:56 ntpd[1978]: Listen normally on 10 calid6796c3f941 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 15 05:16:56.527596 ntpd[1978]: 15 Jul 05:16:56 ntpd[1978]: Listen normally on 11 cali3f98176bfc2 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 15 05:16:56.527596 ntpd[1978]: 15 Jul 05:16:56 ntpd[1978]: Listen normally on 12 cali4a6b392f93a [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 05:16:56.527596 ntpd[1978]: 15 Jul 05:16:56 ntpd[1978]: Listen normally on 13 cali402c11803dc [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 05:16:56.527596 ntpd[1978]: 15 Jul 05:16:56 ntpd[1978]: Listen normally on 14 cali40685d174cc [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 05:16:56.527596 ntpd[1978]: 15 Jul 05:16:56 ntpd[1978]: Listen normally on 15 cali20d3a2809d2 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 05:16:56.526402 ntpd[1978]: Listen normally on 9 cali5ad0528fa36 [fe80::ecee:eeff:feee:eeee%5]:123 Jul 15 05:16:56.526486 ntpd[1978]: Listen normally on 10 calid6796c3f941 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 15 05:16:56.526549 ntpd[1978]: Listen normally on 11 cali3f98176bfc2 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 15 05:16:56.526589 ntpd[1978]: Listen normally on 12 cali4a6b392f93a [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 05:16:56.526651 ntpd[1978]: Listen normally on 13 cali402c11803dc [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 05:16:56.526722 ntpd[1978]: Listen normally on 14 cali40685d174cc [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 05:16:56.526762 ntpd[1978]: Listen normally on 15 cali20d3a2809d2 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 05:16:57.049514 sshd[5706]: Connection closed by 139.178.89.65 port 58354 Jul 15 05:16:57.048685 sshd-session[5678]: pam_unix(sshd:session): session closed for user core Jul 15 05:16:57.066433 systemd[1]: sshd@9-172.31.21.211:22-139.178.89.65:58354.service: Deactivated successfully. Jul 15 05:16:57.075740 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:16:57.085924 systemd-logind[1983]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:16:57.091152 systemd-logind[1983]: Removed session 10. Jul 15 05:16:57.442300 systemd-networkd[1814]: vxlan.calico: Link UP Jul 15 05:16:57.442311 systemd-networkd[1814]: vxlan.calico: Gained carrier Jul 15 05:16:57.446475 (udev-worker)[5748]: Network interface NamePolicy= disabled on kernel command line. Jul 15 05:16:57.506705 (udev-worker)[5745]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 05:16:57.790941 containerd[2010]: time="2025-07-15T05:16:57.790575555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 05:16:57.987945 containerd[2010]: time="2025-07-15T05:16:57.987888949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:58.043776 containerd[2010]: time="2025-07-15T05:16:58.042728723Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:58.046956 containerd[2010]: time="2025-07-15T05:16:58.046872803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:16:58.048814 containerd[2010]: time="2025-07-15T05:16:58.047833752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.554398799s" Jul 15 05:16:58.048814 containerd[2010]: time="2025-07-15T05:16:58.047869762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 05:16:58.050294 containerd[2010]: time="2025-07-15T05:16:58.050263650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 05:16:58.124051 containerd[2010]: time="2025-07-15T05:16:58.123873068Z" level=info msg="CreateContainer within sandbox \"0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 05:16:58.241851 containerd[2010]: time="2025-07-15T05:16:58.241807779Z" level=info msg="Container f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:16:58.279898 containerd[2010]: time="2025-07-15T05:16:58.279855365Z" level=info msg="CreateContainer within sandbox \"0489d2a67afb171e521aaf12b49456f97cb2539e75c1ec6e330342477f11e58f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\"" Jul 15 05:16:58.280854 containerd[2010]: time="2025-07-15T05:16:58.280812320Z" level=info msg="StartContainer for \"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\"" Jul 15 05:16:58.283422 containerd[2010]: time="2025-07-15T05:16:58.283369531Z" level=info msg="connecting to shim f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca" address="unix:///run/containerd/s/a67dac17bc6ce44a23150190bb7e88358686edc140725e449ba45449a214c193" protocol=ttrpc version=3 Jul 15 05:16:58.316881 systemd[1]: Started cri-containerd-f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca.scope - libcontainer container f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca. 
Jul 15 05:16:58.433096 containerd[2010]: time="2025-07-15T05:16:58.433054037Z" level=info msg="StartContainer for \"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\" returns successfully" Jul 15 05:16:58.769914 kubelet[3337]: I0715 05:16:58.769838 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-8hfgl" podStartSLOduration=23.570164922 podStartE2EDuration="31.757041399s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:49.863027011 +0000 UTC m=+43.771673835" lastFinishedPulling="2025-07-15 05:16:58.049903483 +0000 UTC m=+51.958550312" observedRunningTime="2025-07-15 05:16:58.755462554 +0000 UTC m=+52.664109400" watchObservedRunningTime="2025-07-15 05:16:58.757041399 +0000 UTC m=+52.665688245" Jul 15 05:16:58.888714 containerd[2010]: time="2025-07-15T05:16:58.888397608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\" id:\"3918f1695e24d4e27f93feb35642c6744d1f8e7e826d8706ff15ea9a4fd01185\" pid:5858 exit_status:1 exited_at:{seconds:1752556618 nanos:879784746}" Jul 15 05:16:59.442053 systemd-networkd[1814]: vxlan.calico: Gained IPv6LL Jul 15 05:16:59.869741 containerd[2010]: time="2025-07-15T05:16:59.869653971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\" id:\"5150285d3d964ccc61355f64bc179f63986b54eca0ad4b5d1e5663137a452b47\" pid:5882 exit_status:1 exited_at:{seconds:1752556619 nanos:869228907}" Jul 15 05:17:00.987707 containerd[2010]: time="2025-07-15T05:17:00.987544129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\" id:\"b1f1886baa0bdc18555282545103ec19ef80d5d1394f05c7ab53d8dbcec8195a\" pid:5908 exited_at:{seconds:1752556620 nanos:985938392}" Jul 15 05:17:01.014837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703542936.mount: Deactivated successfully. 
Jul 15 05:17:01.048686 containerd[2010]: time="2025-07-15T05:17:01.047117895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:01.049998 containerd[2010]: time="2025-07-15T05:17:01.049956337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 05:17:01.051571 containerd[2010]: time="2025-07-15T05:17:01.051529114Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:01.056390 containerd[2010]: time="2025-07-15T05:17:01.056346972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:01.056992 containerd[2010]: time="2025-07-15T05:17:01.056956911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.006642167s" Jul 15 05:17:01.056992 containerd[2010]: time="2025-07-15T05:17:01.056995223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 05:17:01.058408 containerd[2010]: time="2025-07-15T05:17:01.058376349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:17:01.072166 containerd[2010]: time="2025-07-15T05:17:01.072079305Z" level=info msg="CreateContainer within sandbox \"6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 05:17:01.085525 containerd[2010]: time="2025-07-15T05:17:01.084837984Z" level=info msg="Container fda93a9fc1b3d7e09a43cadb3b22963e40d9a05fa9c7e4917879b0cc407644cc: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:01.099076 containerd[2010]: time="2025-07-15T05:17:01.099023402Z" level=info msg="CreateContainer within sandbox \"6960ab3b70e3a8222009b07fc0b20220da638d73e5d9300691960a1ac0066ac6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fda93a9fc1b3d7e09a43cadb3b22963e40d9a05fa9c7e4917879b0cc407644cc\"" Jul 15 05:17:01.100486 containerd[2010]: time="2025-07-15T05:17:01.099889713Z" level=info msg="StartContainer for \"fda93a9fc1b3d7e09a43cadb3b22963e40d9a05fa9c7e4917879b0cc407644cc\"" Jul 15 05:17:01.102041 containerd[2010]: time="2025-07-15T05:17:01.102012291Z" level=info msg="connecting to shim fda93a9fc1b3d7e09a43cadb3b22963e40d9a05fa9c7e4917879b0cc407644cc" address="unix:///run/containerd/s/b642a77089b67adbe0943a6b14810ecc9053b3d5784cb8b7fd88d55b158d2006" protocol=ttrpc version=3 Jul 15 05:17:01.144900 systemd[1]: Started cri-containerd-fda93a9fc1b3d7e09a43cadb3b22963e40d9a05fa9c7e4917879b0cc407644cc.scope - libcontainer container fda93a9fc1b3d7e09a43cadb3b22963e40d9a05fa9c7e4917879b0cc407644cc. 
Jul 15 05:17:01.209973 containerd[2010]: time="2025-07-15T05:17:01.209932207Z" level=info msg="StartContainer for \"fda93a9fc1b3d7e09a43cadb3b22963e40d9a05fa9c7e4917879b0cc407644cc\" returns successfully" Jul 15 05:17:01.525992 ntpd[1978]: Listen normally on 16 vxlan.calico 192.168.116.128:123 Jul 15 05:17:01.526564 ntpd[1978]: Listen normally on 17 vxlan.calico [fe80::6429:c3ff:febc:96bd%12]:123 Jul 15 05:17:01.527208 ntpd[1978]: 15 Jul 05:17:01 ntpd[1978]: Listen normally on 16 vxlan.calico 192.168.116.128:123 Jul 15 05:17:01.527208 ntpd[1978]: 15 Jul 05:17:01 ntpd[1978]: Listen normally on 17 vxlan.calico [fe80::6429:c3ff:febc:96bd%12]:123 Jul 15 05:17:02.091973 systemd[1]: Started sshd@10-172.31.21.211:22-139.178.89.65:56244.service - OpenSSH per-connection server daemon (139.178.89.65:56244). Jul 15 05:17:02.380686 sshd[5960]: Accepted publickey for core from 139.178.89.65 port 56244 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:02.384815 sshd-session[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:02.390870 systemd-logind[1983]: New session 11 of user core. Jul 15 05:17:02.396817 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:17:03.507691 sshd[5963]: Connection closed by 139.178.89.65 port 56244 Jul 15 05:17:03.507214 sshd-session[5960]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:03.514433 systemd[1]: sshd@10-172.31.21.211:22-139.178.89.65:56244.service: Deactivated successfully. Jul 15 05:17:03.518627 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 05:17:03.521970 systemd-logind[1983]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:17:03.524140 systemd-logind[1983]: Removed session 11. 
Jul 15 05:17:04.111172 containerd[2010]: time="2025-07-15T05:17:04.111113945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:04.114838 containerd[2010]: time="2025-07-15T05:17:04.114789063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 05:17:04.125109 containerd[2010]: time="2025-07-15T05:17:04.125035667Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:04.131903 containerd[2010]: time="2025-07-15T05:17:04.131860823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:04.132595 containerd[2010]: time="2025-07-15T05:17:04.132565452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.074161896s" Jul 15 05:17:04.132595 containerd[2010]: time="2025-07-15T05:17:04.132599023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:17:04.155322 containerd[2010]: time="2025-07-15T05:17:04.155290742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:17:04.184686 containerd[2010]: time="2025-07-15T05:17:04.184626736Z" level=info msg="CreateContainer within sandbox \"458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:17:04.207683 containerd[2010]: time="2025-07-15T05:17:04.204420894Z" level=info msg="Container 5c1e8d742f22ce0787d9939bc093d6df29bade16f5c55fe4df3bf9c8157fac52: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:04.225777 containerd[2010]: time="2025-07-15T05:17:04.225729136Z" level=info msg="CreateContainer within sandbox \"458332599a6ef9c5e3d58fef42d808c9267494bfb2999b8ab9f59f2772939c6c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5c1e8d742f22ce0787d9939bc093d6df29bade16f5c55fe4df3bf9c8157fac52\"" Jul 15 05:17:04.227730 containerd[2010]: time="2025-07-15T05:17:04.227474643Z" level=info msg="StartContainer for \"5c1e8d742f22ce0787d9939bc093d6df29bade16f5c55fe4df3bf9c8157fac52\"" Jul 15 05:17:04.229749 containerd[2010]: time="2025-07-15T05:17:04.229709110Z" level=info msg="connecting to shim 5c1e8d742f22ce0787d9939bc093d6df29bade16f5c55fe4df3bf9c8157fac52" address="unix:///run/containerd/s/ac5ed6ff5c6e8e0dba88254cf4cbe1a4727917d2e65a7b8c35e1ef60eedf53c2" protocol=ttrpc version=3 Jul 15 05:17:04.262938 systemd[1]: Started cri-containerd-5c1e8d742f22ce0787d9939bc093d6df29bade16f5c55fe4df3bf9c8157fac52.scope - libcontainer container 5c1e8d742f22ce0787d9939bc093d6df29bade16f5c55fe4df3bf9c8157fac52. 
Jul 15 05:17:04.341893 containerd[2010]: time="2025-07-15T05:17:04.341862844Z" level=info msg="StartContainer for \"5c1e8d742f22ce0787d9939bc093d6df29bade16f5c55fe4df3bf9c8157fac52\" returns successfully" Jul 15 05:17:04.610546 containerd[2010]: time="2025-07-15T05:17:04.610487751Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:04.613931 containerd[2010]: time="2025-07-15T05:17:04.613871817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 05:17:04.637190 containerd[2010]: time="2025-07-15T05:17:04.637075885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 481.605226ms" Jul 15 05:17:04.637190 containerd[2010]: time="2025-07-15T05:17:04.637143281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:17:04.639296 containerd[2010]: time="2025-07-15T05:17:04.639234209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 05:17:04.645483 containerd[2010]: time="2025-07-15T05:17:04.645035093Z" level=info msg="CreateContainer within sandbox \"f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:17:04.660555 containerd[2010]: time="2025-07-15T05:17:04.660509156Z" level=info msg="Container 124b9da2655f535d8a46d38200b160147a6a01550cc07cc107381265fa2a2c39: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:04.680003 containerd[2010]: time="2025-07-15T05:17:04.678545648Z" level=info msg="CreateContainer within sandbox \"f0b45d267f137bac6f4ad77b92370f4557d9a85fc1c6df4e673f9d7c4be9506b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"124b9da2655f535d8a46d38200b160147a6a01550cc07cc107381265fa2a2c39\"" Jul 15 05:17:04.682856 containerd[2010]: time="2025-07-15T05:17:04.682817655Z" level=info msg="StartContainer for \"124b9da2655f535d8a46d38200b160147a6a01550cc07cc107381265fa2a2c39\"" Jul 15 05:17:04.684306 containerd[2010]: time="2025-07-15T05:17:04.684246577Z" level=info msg="connecting to shim 124b9da2655f535d8a46d38200b160147a6a01550cc07cc107381265fa2a2c39" address="unix:///run/containerd/s/95989268b0908cd0748a221326b9f60bfb11a3937cbf28313965eb13baccb07a" protocol=ttrpc version=3 Jul 15 05:17:04.716914 systemd[1]: Started cri-containerd-124b9da2655f535d8a46d38200b160147a6a01550cc07cc107381265fa2a2c39.scope - libcontainer container 124b9da2655f535d8a46d38200b160147a6a01550cc07cc107381265fa2a2c39. 
Jul 15 05:17:04.826296 containerd[2010]: time="2025-07-15T05:17:04.826246497Z" level=info msg="StartContainer for \"124b9da2655f535d8a46d38200b160147a6a01550cc07cc107381265fa2a2c39\" returns successfully" Jul 15 05:17:04.962396 kubelet[3337]: I0715 05:17:04.950888 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d747b47d9-9kmx9" podStartSLOduration=28.163312058 podStartE2EDuration="41.905596303s" podCreationTimestamp="2025-07-15 05:16:23 +0000 UTC" firstStartedPulling="2025-07-15 05:16:50.895960432 +0000 UTC m=+44.804607256" lastFinishedPulling="2025-07-15 05:17:04.638244673 +0000 UTC m=+58.546891501" observedRunningTime="2025-07-15 05:17:04.904452609 +0000 UTC m=+58.813099456" watchObservedRunningTime="2025-07-15 05:17:04.905596303 +0000 UTC m=+58.814243147" Jul 15 05:17:04.962396 kubelet[3337]: I0715 05:17:04.962055 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-665859d6f7-wrsm5" podStartSLOduration=5.937899156 podStartE2EDuration="17.96203354s" podCreationTimestamp="2025-07-15 05:16:47 +0000 UTC" firstStartedPulling="2025-07-15 05:16:49.034096429 +0000 UTC m=+42.942743253" lastFinishedPulling="2025-07-15 05:17:01.05823081 +0000 UTC m=+54.966877637" observedRunningTime="2025-07-15 05:17:01.884489392 +0000 UTC m=+55.793136238" watchObservedRunningTime="2025-07-15 05:17:04.96203354 +0000 UTC m=+58.870680390" Jul 15 05:17:04.962993 kubelet[3337]: I0715 05:17:04.962826 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d747b47d9-86lt6" podStartSLOduration=28.619670087 podStartE2EDuration="41.962810271s" podCreationTimestamp="2025-07-15 05:16:23 +0000 UTC" firstStartedPulling="2025-07-15 05:16:50.807540079 +0000 UTC m=+44.716186917" lastFinishedPulling="2025-07-15 05:17:04.150680264 +0000 UTC m=+58.059327101" observedRunningTime="2025-07-15 05:17:04.962685525 +0000 UTC m=+58.871332363" watchObservedRunningTime="2025-07-15 05:17:04.962810271 +0000 UTC m=+58.871457119" Jul 15 05:17:05.917920 kubelet[3337]: I0715 05:17:05.917361 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:17:08.547832 systemd[1]: Started sshd@11-172.31.21.211:22-139.178.89.65:56260.service - OpenSSH per-connection server daemon (139.178.89.65:56260). Jul 15 05:17:08.902149 sshd[6083]: Accepted publickey for core from 139.178.89.65 port 56260 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:08.905561 sshd-session[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:08.914037 systemd-logind[1983]: New session 12 of user core. Jul 15 05:17:08.916969 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 15 05:17:09.637080 containerd[2010]: time="2025-07-15T05:17:09.636144277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:09.644546 containerd[2010]: time="2025-07-15T05:17:09.644488129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 05:17:09.651228 containerd[2010]: time="2025-07-15T05:17:09.649749707Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:09.660603 containerd[2010]: time="2025-07-15T05:17:09.659882441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:09.660603 containerd[2010]: time="2025-07-15T05:17:09.660595668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.021324275s" Jul 15 05:17:09.661318 containerd[2010]: time="2025-07-15T05:17:09.660628405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 05:17:09.887550 containerd[2010]: time="2025-07-15T05:17:09.887110353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 05:17:10.111387 containerd[2010]: time="2025-07-15T05:17:10.111352121Z" level=info msg="CreateContainer within sandbox \"4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 05:17:10.134337 containerd[2010]: time="2025-07-15T05:17:10.133967553Z" level=info msg="Container d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:10.194399 sshd[6087]: Connection closed by 139.178.89.65 port 56260 Jul 15 05:17:10.194914 sshd-session[6083]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:10.208625 systemd[1]: sshd@11-172.31.21.211:22-139.178.89.65:56260.service: Deactivated successfully. Jul 15 05:17:10.217836 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:17:10.224923 systemd-logind[1983]: Session 12 logged out. Waiting for processes to exit. Jul 15 05:17:10.237691 containerd[2010]: time="2025-07-15T05:17:10.237440555Z" level=info msg="CreateContainer within sandbox \"4f1f4798de7edc6a98bcbc0174bd1f6949390fea1760f52365a5ef387473bfc7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5\"" Jul 15 05:17:10.240202 systemd[1]: Started sshd@12-172.31.21.211:22-139.178.89.65:51654.service - OpenSSH per-connection server daemon (139.178.89.65:51654). 
Jul 15 05:17:10.242729 containerd[2010]: time="2025-07-15T05:17:10.242260805Z" level=info msg="StartContainer for \"d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5\"" Jul 15 05:17:10.253061 systemd-logind[1983]: Removed session 12. Jul 15 05:17:10.258317 containerd[2010]: time="2025-07-15T05:17:10.258255763Z" level=info msg="connecting to shim d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5" address="unix:///run/containerd/s/62b65ee76b99573dcd5af3467ac410c7848c305e6de44412d9613f101070f6b8" protocol=ttrpc version=3 Jul 15 05:17:10.297941 systemd[1]: Started cri-containerd-d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5.scope - libcontainer container d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5. Jul 15 05:17:10.425860 containerd[2010]: time="2025-07-15T05:17:10.425804502Z" level=info msg="StartContainer for \"d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5\" returns successfully" Jul 15 05:17:10.442053 kubelet[3337]: I0715 05:17:10.442010 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:17:10.488882 sshd[6104]: Accepted publickey for core from 139.178.89.65 port 51654 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:10.492420 sshd-session[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:10.502322 systemd-logind[1983]: New session 13 of user core. Jul 15 05:17:10.507913 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:17:10.913758 sshd[6145]: Connection closed by 139.178.89.65 port 51654 Jul 15 05:17:10.915611 sshd-session[6104]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:10.921282 systemd[1]: sshd@12-172.31.21.211:22-139.178.89.65:51654.service: Deactivated successfully. Jul 15 05:17:10.925456 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:17:10.929700 systemd-logind[1983]: Session 13 logged out. Waiting for processes to exit. Jul 15 05:17:10.931820 systemd-logind[1983]: Removed session 13. Jul 15 05:17:10.946656 systemd[1]: Started sshd@13-172.31.21.211:22-139.178.89.65:51668.service - OpenSSH per-connection server daemon (139.178.89.65:51668). Jul 15 05:17:11.132774 sshd[6158]: Accepted publickey for core from 139.178.89.65 port 51668 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:11.134063 sshd-session[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:11.139908 systemd-logind[1983]: New session 14 of user core. Jul 15 05:17:11.142811 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 15 05:17:11.175114 kubelet[3337]: I0715 05:17:11.168595 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76587bdddf-m5mzb" podStartSLOduration=24.289583738 podStartE2EDuration="43.168576738s" podCreationTimestamp="2025-07-15 05:16:28 +0000 UTC" firstStartedPulling="2025-07-15 05:16:50.984173444 +0000 UTC m=+44.892820269" lastFinishedPulling="2025-07-15 05:17:09.863166427 +0000 UTC m=+63.771813269" observedRunningTime="2025-07-15 05:17:11.168359892 +0000 UTC m=+65.077006739" watchObservedRunningTime="2025-07-15 05:17:11.168576738 +0000 UTC m=+65.077223575" Jul 15 05:17:11.259341 containerd[2010]: time="2025-07-15T05:17:11.259252078Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5\" id:\"c72618f2520c5672b4ea3aa4ac774e289cf012f3541f00e3f8d2c22180190a95\" pid:6175 exited_at:{seconds:1752556631 nanos:244832715}" Jul 15 05:17:11.419477 sshd[6161]: Connection closed by 139.178.89.65 port 51668 Jul 15 05:17:11.423333 sshd-session[6158]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:11.431304 systemd[1]: sshd@13-172.31.21.211:22-139.178.89.65:51668.service: Deactivated successfully. Jul 15 05:17:11.436720 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:17:11.441016 systemd-logind[1983]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:17:11.444626 systemd-logind[1983]: Removed session 14. Jul 15 05:17:12.798393 containerd[2010]: time="2025-07-15T05:17:12.798333771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.800474 containerd[2010]: time="2025-07-15T05:17:12.800426788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 05:17:12.812447 containerd[2010]: time="2025-07-15T05:17:12.812377201Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.824593 containerd[2010]: time="2025-07-15T05:17:12.824554211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:17:12.824922 containerd[2010]: time="2025-07-15T05:17:12.824890392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.937743352s" Jul 15 05:17:12.824975 containerd[2010]: time="2025-07-15T05:17:12.824929829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 05:17:12.844301 containerd[2010]: time="2025-07-15T05:17:12.844254151Z" level=info msg="CreateContainer within sandbox \"0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 15 05:17:12.893117 containerd[2010]: time="2025-07-15T05:17:12.893072970Z" level=info msg="Container b94460f2c7140bb9e795fd1474a017855f86c8b5b60ccb984833d7e1cc9eee00: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:17:12.979881 containerd[2010]: time="2025-07-15T05:17:12.979836365Z" level=info msg="CreateContainer within sandbox \"0a6241c9d8117e5fde60794692c35818835680d5042b294c004b073bb4253ac0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b94460f2c7140bb9e795fd1474a017855f86c8b5b60ccb984833d7e1cc9eee00\"" Jul 15 05:17:12.980641 containerd[2010]: time="2025-07-15T05:17:12.980608881Z" level=info msg="StartContainer for \"b94460f2c7140bb9e795fd1474a017855f86c8b5b60ccb984833d7e1cc9eee00\"" Jul 15 05:17:12.981942 containerd[2010]: time="2025-07-15T05:17:12.981912489Z" level=info msg="connecting to shim b94460f2c7140bb9e795fd1474a017855f86c8b5b60ccb984833d7e1cc9eee00" address="unix:///run/containerd/s/64074b93411947a77e1f5ccdf3a237def9da0df286036437fb735781b827f03d" protocol=ttrpc version=3 Jul 15 05:17:13.007926 systemd[1]: Started cri-containerd-b94460f2c7140bb9e795fd1474a017855f86c8b5b60ccb984833d7e1cc9eee00.scope - libcontainer container b94460f2c7140bb9e795fd1474a017855f86c8b5b60ccb984833d7e1cc9eee00. Jul 15 05:17:13.096432 containerd[2010]: time="2025-07-15T05:17:13.095819809Z" level=info msg="StartContainer for \"b94460f2c7140bb9e795fd1474a017855f86c8b5b60ccb984833d7e1cc9eee00\" returns successfully" Jul 15 05:17:13.232447 kubelet[3337]: I0715 05:17:13.231604 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gfc8q" podStartSLOduration=23.178501945 podStartE2EDuration="46.231580348s" podCreationTimestamp="2025-07-15 05:16:27 +0000 UTC" firstStartedPulling="2025-07-15 05:16:49.772907852 +0000 UTC m=+43.681554695" lastFinishedPulling="2025-07-15 05:17:12.825986274 +0000 UTC m=+66.734633098" observedRunningTime="2025-07-15 05:17:13.184138796 +0000 UTC m=+67.092785656" watchObservedRunningTime="2025-07-15 05:17:13.231580348 +0000 UTC m=+67.140227191" Jul 15 05:17:13.682343 kubelet[3337]: I0715 05:17:13.676259 3337 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 05:17:13.685032 kubelet[3337]: I0715 05:17:13.685010 3337 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 05:17:16.454849 systemd[1]: Started sshd@14-172.31.21.211:22-139.178.89.65:51674.service - OpenSSH per-connection server daemon (139.178.89.65:51674). Jul 15 05:17:16.693616 sshd[6240]: Accepted publickey for core from 139.178.89.65 port 51674 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:16.696186 sshd-session[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:16.701733 systemd-logind[1983]: New session 15 of user core. Jul 15 05:17:16.706853 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 05:17:17.241643 sshd[6243]: Connection closed by 139.178.89.65 port 51674 Jul 15 05:17:17.242297 sshd-session[6240]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:17.245697 systemd[1]: sshd@14-172.31.21.211:22-139.178.89.65:51674.service: Deactivated successfully. Jul 15 05:17:17.247694 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 05:17:17.249253 systemd-logind[1983]: Session 15 logged out. Waiting for processes to exit. Jul 15 05:17:17.251489 systemd-logind[1983]: Removed session 15. Jul 15 05:17:19.082579 containerd[2010]: time="2025-07-15T05:17:19.082509755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\" id:\"23a06ec4ef695ec8a9294483de2c202ed386eeaff0b7eeab3ca1642812216c66\" pid:6274 exited_at:{seconds:1752556639 nanos:82172094}" Jul 15 05:17:22.276465 systemd[1]: Started sshd@15-172.31.21.211:22-139.178.89.65:59208.service - OpenSSH per-connection server daemon (139.178.89.65:59208). Jul 15 05:17:22.480901 sshd[6289]: Accepted publickey for core from 139.178.89.65 port 59208 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:22.482453 sshd-session[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:22.487759 systemd-logind[1983]: New session 16 of user core. Jul 15 05:17:22.492894 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 05:17:22.857454 sshd[6292]: Connection closed by 139.178.89.65 port 59208 Jul 15 05:17:22.858366 sshd-session[6289]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:22.865064 systemd[1]: sshd@15-172.31.21.211:22-139.178.89.65:59208.service: Deactivated successfully. Jul 15 05:17:22.867536 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 05:17:22.869221 systemd-logind[1983]: Session 16 logged out. Waiting for processes to exit. Jul 15 05:17:22.871018 systemd-logind[1983]: Removed session 16. Jul 15 05:17:27.902253 systemd[1]: Started sshd@16-172.31.21.211:22-139.178.89.65:59214.service - OpenSSH per-connection server daemon (139.178.89.65:59214). Jul 15 05:17:28.168503 sshd[6310]: Accepted publickey for core from 139.178.89.65 port 59214 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:28.171967 sshd-session[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:28.181798 systemd-logind[1983]: New session 17 of user core. Jul 15 05:17:28.187908 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 05:17:29.004688 sshd[6313]: Connection closed by 139.178.89.65 port 59214 Jul 15 05:17:29.006012 sshd-session[6310]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:29.012881 systemd-logind[1983]: Session 17 logged out. Waiting for processes to exit. Jul 15 05:17:29.015309 systemd[1]: sshd@16-172.31.21.211:22-139.178.89.65:59214.service: Deactivated successfully. Jul 15 05:17:29.019834 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 05:17:29.027253 systemd-logind[1983]: Removed session 17. Jul 15 05:17:31.533786 containerd[2010]: time="2025-07-15T05:17:31.533731062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\" id:\"6695db948a2c2f8e0acd8346e403f54fb5afbe51ed8d5c6203a8060bfcc37c4b\" pid:6337 exited_at:{seconds:1752556651 nanos:533403654}" Jul 15 05:17:34.036401 systemd[1]: Started sshd@17-172.31.21.211:22-139.178.89.65:40726.service - OpenSSH per-connection server daemon (139.178.89.65:40726). 
Jul 15 05:17:34.236735 sshd[6348]: Accepted publickey for core from 139.178.89.65 port 40726 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:34.238553 sshd-session[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:34.249284 systemd-logind[1983]: New session 18 of user core. Jul 15 05:17:34.253910 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 05:17:34.927306 sshd[6351]: Connection closed by 139.178.89.65 port 40726 Jul 15 05:17:34.928409 sshd-session[6348]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:34.938169 systemd-logind[1983]: Session 18 logged out. Waiting for processes to exit. Jul 15 05:17:34.939021 systemd[1]: sshd@17-172.31.21.211:22-139.178.89.65:40726.service: Deactivated successfully. Jul 15 05:17:34.944940 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 05:17:34.965529 systemd-logind[1983]: Removed session 18. Jul 15 05:17:34.966967 systemd[1]: Started sshd@18-172.31.21.211:22-139.178.89.65:40740.service - OpenSSH per-connection server daemon (139.178.89.65:40740). Jul 15 05:17:35.179149 sshd[6364]: Accepted publickey for core from 139.178.89.65 port 40740 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:35.180585 sshd-session[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:35.191053 systemd-logind[1983]: New session 19 of user core. Jul 15 05:17:35.197870 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 05:17:35.876799 sshd[6367]: Connection closed by 139.178.89.65 port 40740 Jul 15 05:17:35.885906 sshd-session[6364]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:35.897849 systemd[1]: sshd@18-172.31.21.211:22-139.178.89.65:40740.service: Deactivated successfully. Jul 15 05:17:35.900475 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 05:17:35.902427 systemd-logind[1983]: Session 19 logged out. Waiting for processes to exit. Jul 15 05:17:35.915427 systemd[1]: Started sshd@19-172.31.21.211:22-139.178.89.65:40750.service - OpenSSH per-connection server daemon (139.178.89.65:40750). Jul 15 05:17:35.916933 systemd-logind[1983]: Removed session 19. Jul 15 05:17:36.133052 sshd[6376]: Accepted publickey for core from 139.178.89.65 port 40750 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:36.134977 sshd-session[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:36.142729 systemd-logind[1983]: New session 20 of user core. Jul 15 05:17:36.147936 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 05:17:37.355509 sshd[6379]: Connection closed by 139.178.89.65 port 40750 Jul 15 05:17:37.357453 sshd-session[6376]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:37.366009 systemd[1]: sshd@19-172.31.21.211:22-139.178.89.65:40750.service: Deactivated successfully. Jul 15 05:17:37.369487 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 05:17:37.371743 systemd-logind[1983]: Session 20 logged out. Waiting for processes to exit. Jul 15 05:17:37.375728 systemd-logind[1983]: Removed session 20. Jul 15 05:17:37.393062 systemd[1]: Started sshd@20-172.31.21.211:22-139.178.89.65:40756.service - OpenSSH per-connection server daemon (139.178.89.65:40756). 
Jul 15 05:17:37.601893 sshd[6393]: Accepted publickey for core from 139.178.89.65 port 40756 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:37.604997 sshd-session[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:37.613376 systemd-logind[1983]: New session 21 of user core. Jul 15 05:17:37.623899 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 05:17:39.115694 sshd[6399]: Connection closed by 139.178.89.65 port 40756 Jul 15 05:17:39.116521 sshd-session[6393]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:39.123676 systemd[1]: sshd@20-172.31.21.211:22-139.178.89.65:40756.service: Deactivated successfully. Jul 15 05:17:39.125390 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 05:17:39.126250 systemd-logind[1983]: Session 21 logged out. Waiting for processes to exit. Jul 15 05:17:39.151261 systemd-logind[1983]: Removed session 21. Jul 15 05:17:39.153008 systemd[1]: Started sshd@21-172.31.21.211:22-139.178.89.65:43532.service - OpenSSH per-connection server daemon (139.178.89.65:43532). Jul 15 05:17:39.374125 sshd[6435]: Accepted publickey for core from 139.178.89.65 port 43532 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:39.376554 sshd-session[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:39.401055 systemd-logind[1983]: New session 22 of user core. Jul 15 05:17:39.405892 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 05:17:40.340417 containerd[2010]: time="2025-07-15T05:17:40.340343642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\" id:\"26b9a4b028f2fd13e4720094eed1178992b80b471ed283297880988db78f52c7\" pid:6425 exited_at:{seconds:1752556660 nanos:248020632}" Jul 15 05:17:40.703223 sshd[6441]: Connection closed by 139.178.89.65 port 43532 Jul 15 05:17:40.707676 sshd-session[6435]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:40.711708 systemd-logind[1983]: Session 22 logged out. Waiting for processes to exit. Jul 15 05:17:40.712372 systemd[1]: sshd@21-172.31.21.211:22-139.178.89.65:43532.service: Deactivated successfully. Jul 15 05:17:40.716405 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 05:17:40.722772 systemd-logind[1983]: Removed session 22. Jul 15 05:17:41.335111 containerd[2010]: time="2025-07-15T05:17:41.335058953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5\" id:\"140b04affee930603b1679aa0f4828420947bd70292c46c95fbc88d5283a865a\" pid:6468 exited_at:{seconds:1752556661 nanos:334485648}" Jul 15 05:17:45.742212 systemd[1]: Started sshd@22-172.31.21.211:22-139.178.89.65:43546.service - OpenSSH per-connection server daemon (139.178.89.65:43546). Jul 15 05:17:46.046228 sshd[6482]: Accepted publickey for core from 139.178.89.65 port 43546 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:46.049469 sshd-session[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:46.061823 systemd-logind[1983]: New session 23 of user core. Jul 15 05:17:46.067595 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 15 05:17:46.658476 sshd[6485]: Connection closed by 139.178.89.65 port 43546 Jul 15 05:17:46.659496 sshd-session[6482]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:46.665107 systemd[1]: sshd@22-172.31.21.211:22-139.178.89.65:43546.service: Deactivated successfully. Jul 15 05:17:46.669186 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 05:17:46.671923 systemd-logind[1983]: Session 23 logged out. Waiting for processes to exit. Jul 15 05:17:46.674778 systemd-logind[1983]: Removed session 23. Jul 15 05:17:49.262117 containerd[2010]: time="2025-07-15T05:17:49.262077426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\" id:\"a5fc9888cb6ae61801a8f6703ea5bf7526a420c75f8f6cd3f4b39ed384eb771a\" pid:6509 exited_at:{seconds:1752556669 nanos:261649642}" Jul 15 05:17:51.693856 systemd[1]: Started sshd@23-172.31.21.211:22-139.178.89.65:32956.service - OpenSSH per-connection server daemon (139.178.89.65:32956). Jul 15 05:17:51.892681 sshd[6521]: Accepted publickey for core from 139.178.89.65 port 32956 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:51.893776 sshd-session[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:51.898617 systemd-logind[1983]: New session 24 of user core. Jul 15 05:17:51.906094 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 05:17:52.429689 sshd[6524]: Connection closed by 139.178.89.65 port 32956 Jul 15 05:17:52.430943 sshd-session[6521]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:52.437518 systemd[1]: sshd@23-172.31.21.211:22-139.178.89.65:32956.service: Deactivated successfully. Jul 15 05:17:52.441938 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 05:17:52.443835 systemd-logind[1983]: Session 24 logged out. Waiting for processes to exit. Jul 15 05:17:52.446417 systemd-logind[1983]: Removed session 24. Jul 15 05:17:57.470649 systemd[1]: Started sshd@24-172.31.21.211:22-139.178.89.65:32960.service - OpenSSH per-connection server daemon (139.178.89.65:32960). Jul 15 05:17:57.663257 sshd[6536]: Accepted publickey for core from 139.178.89.65 port 32960 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:17:57.666310 sshd-session[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:17:57.672008 systemd-logind[1983]: New session 25 of user core. Jul 15 05:17:57.682895 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 15 05:17:57.890513 sshd[6539]: Connection closed by 139.178.89.65 port 32960 Jul 15 05:17:57.891111 sshd-session[6536]: pam_unix(sshd:session): session closed for user core Jul 15 05:17:57.894412 systemd[1]: sshd@24-172.31.21.211:22-139.178.89.65:32960.service: Deactivated successfully. Jul 15 05:17:57.896410 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 05:17:57.898567 systemd-logind[1983]: Session 25 logged out. Waiting for processes to exit. Jul 15 05:17:57.900136 systemd-logind[1983]: Removed session 25. 
Jul 15 05:17:58.550026 containerd[2010]: time="2025-07-15T05:17:58.549983108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5\" id:\"81186d6fd3efb5faf06f7d75f1baab652f31fe801825b3fa699229b753c8f8fa\" pid:6562 exited_at:{seconds:1752556678 nanos:549319491}" Jul 15 05:18:01.065323 containerd[2010]: time="2025-07-15T05:18:01.065272889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\" id:\"f83d8a9daaa1f1cec251986928002400377723115f77b7e794b0f5e109959fa7\" pid:6584 exited_at:{seconds:1752556681 nanos:64604206}" Jul 15 05:18:02.930958 systemd[1]: Started sshd@25-172.31.21.211:22-139.178.89.65:37232.service - OpenSSH per-connection server daemon (139.178.89.65:37232). Jul 15 05:18:03.218701 sshd[6595]: Accepted publickey for core from 139.178.89.65 port 37232 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:18:03.220945 sshd-session[6595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:03.229165 systemd-logind[1983]: New session 26 of user core. Jul 15 05:18:03.237268 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 15 05:18:04.535852 sshd[6598]: Connection closed by 139.178.89.65 port 37232 Jul 15 05:18:04.536767 sshd-session[6595]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:04.542259 systemd-logind[1983]: Session 26 logged out. Waiting for processes to exit. Jul 15 05:18:04.544115 systemd[1]: sshd@25-172.31.21.211:22-139.178.89.65:37232.service: Deactivated successfully. Jul 15 05:18:04.546429 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 05:18:04.548930 systemd-logind[1983]: Removed session 26. Jul 15 05:18:09.568933 systemd[1]: Started sshd@26-172.31.21.211:22-139.178.89.65:42954.service - OpenSSH per-connection server daemon (139.178.89.65:42954). Jul 15 05:18:09.766156 sshd[6613]: Accepted publickey for core from 139.178.89.65 port 42954 ssh2: RSA SHA256:GkB2NQb8ttcecrkr6wMNwKWllqcPg0g7p088zv9jGDI Jul 15 05:18:09.767333 sshd-session[6613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:18:09.772912 systemd-logind[1983]: New session 27 of user core. Jul 15 05:18:09.776862 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 15 05:18:10.146993 sshd[6616]: Connection closed by 139.178.89.65 port 42954 Jul 15 05:18:10.147807 sshd-session[6613]: pam_unix(sshd:session): session closed for user core Jul 15 05:18:10.153984 systemd[1]: sshd@26-172.31.21.211:22-139.178.89.65:42954.service: Deactivated successfully. Jul 15 05:18:10.156228 systemd[1]: session-27.scope: Deactivated successfully. Jul 15 05:18:10.157502 systemd-logind[1983]: Session 27 logged out. Waiting for processes to exit. Jul 15 05:18:10.158933 systemd-logind[1983]: Removed session 27. 
Jul 15 05:18:11.201436 containerd[2010]: time="2025-07-15T05:18:11.201348984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d34a6e9d8b674fe267a1aa39e517167045381c7859326a2b40b90a0006b9bbd5\" id:\"504462cb4be40b7826beb2ecad4efa47a8c2ae3dedbbaaab437213da751f2a2f\" pid:6639 exited_at:{seconds:1752556691 nanos:201111675}" Jul 15 05:18:18.936270 containerd[2010]: time="2025-07-15T05:18:18.936230596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8273e4dd25bc0dec3dc3d00a276a7dda2fef2346da705a6d6fbe493f1e14ed3c\" id:\"d624aa289cb28f61bfde1675a36b10219f6c36d328f66861616e6e597fefc129\" pid:6671 exited_at:{seconds:1752556698 nanos:935160376}" Jul 15 05:18:23.840060 systemd[1]: cri-containerd-e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047.scope: Deactivated successfully. Jul 15 05:18:23.840340 systemd[1]: cri-containerd-e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047.scope: Consumed 5.665s CPU time, 87.8M memory peak, 112.3M read from disk. Jul 15 05:18:23.989151 containerd[2010]: time="2025-07-15T05:18:23.989075131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047\" id:\"e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047\" pid:3157 exit_status:1 exited_at:{seconds:1752556703 nanos:949205083}" Jul 15 05:18:23.991408 containerd[2010]: time="2025-07-15T05:18:23.991343676Z" level=info msg="received exit event container_id:\"e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047\" id:\"e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047\" pid:3157 exit_status:1 exited_at:{seconds:1752556703 nanos:949205083}" Jul 15 05:18:24.112034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047-rootfs.mount: Deactivated successfully. Jul 15 05:18:24.790007 kubelet[3337]: I0715 05:18:24.789251 3337 scope.go:117] "RemoveContainer" containerID="e7f71e1fbbe4ef2bf9ff0e6066e9281aed52c6d9a7df2b9122f1ebaad791f047" Jul 15 05:18:24.858307 systemd[1]: cri-containerd-d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18.scope: Deactivated successfully. Jul 15 05:18:24.859915 systemd[1]: cri-containerd-d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18.scope: Consumed 13.997s CPU time, 113.4M memory peak, 79.8M read from disk. 
Jul 15 05:18:24.862319 containerd[2010]: time="2025-07-15T05:18:24.862274675Z" level=info msg="received exit event container_id:\"d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18\" id:\"d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18\" pid:3932 exit_status:1 exited_at:{seconds:1752556704 nanos:861560222}" Jul 15 05:18:24.863100 containerd[2010]: time="2025-07-15T05:18:24.863064805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18\" id:\"d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18\" pid:3932 exit_status:1 exited_at:{seconds:1752556704 nanos:861560222}" Jul 15 05:18:24.883844 containerd[2010]: time="2025-07-15T05:18:24.883766619Z" level=info msg="CreateContainer within sandbox \"0c5c8147e9650a7052f3c40ced71066b4e5df38632af5696d505e1fc7d07521d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 15 05:18:24.922376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18-rootfs.mount: Deactivated successfully. Jul 15 05:18:25.054909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103773275.mount: Deactivated successfully. Jul 15 05:18:25.056292 containerd[2010]: time="2025-07-15T05:18:25.055740771Z" level=info msg="Container 173bd381dd5b2539acbbef30e16babc6599c4d5b8fa18df0582e6039d978b66d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:18:25.064200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339908820.mount: Deactivated successfully. Jul 15 05:18:25.086686 containerd[2010]: time="2025-07-15T05:18:25.086629019Z" level=info msg="CreateContainer within sandbox \"0c5c8147e9650a7052f3c40ced71066b4e5df38632af5696d505e1fc7d07521d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"173bd381dd5b2539acbbef30e16babc6599c4d5b8fa18df0582e6039d978b66d\"" Jul 15 05:18:25.089817 containerd[2010]: time="2025-07-15T05:18:25.089768232Z" level=info msg="StartContainer for \"173bd381dd5b2539acbbef30e16babc6599c4d5b8fa18df0582e6039d978b66d\"" Jul 15 05:18:25.098766 containerd[2010]: time="2025-07-15T05:18:25.098718564Z" level=info msg="connecting to shim 173bd381dd5b2539acbbef30e16babc6599c4d5b8fa18df0582e6039d978b66d" address="unix:///run/containerd/s/bbde67b195baf970f0205a573ff65289e276b6ccb9ce5967e567b7595ae7627b" protocol=ttrpc version=3 Jul 15 05:18:25.176856 systemd[1]: Started cri-containerd-173bd381dd5b2539acbbef30e16babc6599c4d5b8fa18df0582e6039d978b66d.scope - libcontainer container 173bd381dd5b2539acbbef30e16babc6599c4d5b8fa18df0582e6039d978b66d. 
Jul 15 05:18:25.254336 containerd[2010]: time="2025-07-15T05:18:25.254296872Z" level=info msg="StartContainer for \"173bd381dd5b2539acbbef30e16babc6599c4d5b8fa18df0582e6039d978b66d\" returns successfully" Jul 15 05:18:25.774443 kubelet[3337]: I0715 05:18:25.774394 3337 scope.go:117] "RemoveContainer" containerID="d300fed5e8b99022ee0fd33b5368703ae459a773ce020a31233683e00b1a3c18" Jul 15 05:18:25.786503 containerd[2010]: time="2025-07-15T05:18:25.786355782Z" level=info msg="CreateContainer within sandbox \"367fc8b72133424f8d46fa933f246762409f2eba0ba86761faffd8b5817c87ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 15 05:18:25.812052 containerd[2010]: time="2025-07-15T05:18:25.811292535Z" level=info msg="Container b6d77d15b6ea81ab23683f245e23db12708a9299ec1ec1da265bd2c7538020a4: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:18:25.827104 containerd[2010]: time="2025-07-15T05:18:25.827068191Z" level=info msg="CreateContainer within sandbox \"367fc8b72133424f8d46fa933f246762409f2eba0ba86761faffd8b5817c87ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b6d77d15b6ea81ab23683f245e23db12708a9299ec1ec1da265bd2c7538020a4\"" Jul 15 05:18:25.827899 containerd[2010]: time="2025-07-15T05:18:25.827870104Z" level=info msg="StartContainer for \"b6d77d15b6ea81ab23683f245e23db12708a9299ec1ec1da265bd2c7538020a4\"" Jul 15 05:18:25.828979 containerd[2010]: time="2025-07-15T05:18:25.828939388Z" level=info msg="connecting to shim b6d77d15b6ea81ab23683f245e23db12708a9299ec1ec1da265bd2c7538020a4" address="unix:///run/containerd/s/e49023da0dbace2487041b0c9cdc823ba2ab4eb46b6a3ab8420834dc25a11b25" protocol=ttrpc version=3 Jul 15 05:18:25.878018 systemd[1]: Started cri-containerd-b6d77d15b6ea81ab23683f245e23db12708a9299ec1ec1da265bd2c7538020a4.scope - libcontainer container b6d77d15b6ea81ab23683f245e23db12708a9299ec1ec1da265bd2c7538020a4. Jul 15 05:18:25.956870 containerd[2010]: time="2025-07-15T05:18:25.956806383Z" level=info msg="StartContainer for \"b6d77d15b6ea81ab23683f245e23db12708a9299ec1ec1da265bd2c7538020a4\" returns successfully" Jul 15 05:18:29.059546 systemd[1]: cri-containerd-b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2.scope: Deactivated successfully. Jul 15 05:18:29.061161 systemd[1]: cri-containerd-b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2.scope: Consumed 2.629s CPU time, 37.8M memory peak, 72.3M read from disk. Jul 15 05:18:29.063155 containerd[2010]: time="2025-07-15T05:18:29.063114644Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2\" id:\"b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2\" pid:3184 exit_status:1 exited_at:{seconds:1752556709 nanos:61341443}" Jul 15 05:18:29.063426 containerd[2010]: time="2025-07-15T05:18:29.063216597Z" level=info msg="received exit event container_id:\"b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2\" id:\"b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2\" pid:3184 exit_status:1 exited_at:{seconds:1752556709 nanos:61341443}" Jul 15 05:18:29.126489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2-rootfs.mount: Deactivated successfully. 
Jul 15 05:18:29.385090 kubelet[3337]: E0715 05:18:29.385029 3337 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-211?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 15 05:18:29.789433 kubelet[3337]: I0715 05:18:29.789232 3337 scope.go:117] "RemoveContainer" containerID="b494b83c5a6498dfd86c0d9b299c097461effaa9a921b672632ae583acc2b9c2" Jul 15 05:18:29.791603 containerd[2010]: time="2025-07-15T05:18:29.791572468Z" level=info msg="CreateContainer within sandbox \"755e62372d618906fd4400e6186aa9d2610e3a4558f9028ef3169a296a22910f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 15 05:18:29.814882 containerd[2010]: time="2025-07-15T05:18:29.814188058Z" level=info msg="Container 2fec486c551b72144ee45f849403a9fe267dffc543f549e5dbac13e99e0bfd0c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:18:29.818527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004471126.mount: Deactivated successfully. Jul 15 05:18:29.829324 containerd[2010]: time="2025-07-15T05:18:29.829273596Z" level=info msg="CreateContainer within sandbox \"755e62372d618906fd4400e6186aa9d2610e3a4558f9028ef3169a296a22910f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2fec486c551b72144ee45f849403a9fe267dffc543f549e5dbac13e99e0bfd0c\"" Jul 15 05:18:29.829877 containerd[2010]: time="2025-07-15T05:18:29.829795241Z" level=info msg="StartContainer for \"2fec486c551b72144ee45f849403a9fe267dffc543f549e5dbac13e99e0bfd0c\"" Jul 15 05:18:29.830990 containerd[2010]: time="2025-07-15T05:18:29.830933660Z" level=info msg="connecting to shim 2fec486c551b72144ee45f849403a9fe267dffc543f549e5dbac13e99e0bfd0c" address="unix:///run/containerd/s/0e541796f976eb3ee67873b863a254afa87eabb664f39c6ea9ab5172fcb324f7" protocol=ttrpc version=3 Jul 15 05:18:29.856905 systemd[1]: Started cri-containerd-2fec486c551b72144ee45f849403a9fe267dffc543f549e5dbac13e99e0bfd0c.scope - libcontainer container 2fec486c551b72144ee45f849403a9fe267dffc543f549e5dbac13e99e0bfd0c. Jul 15 05:18:29.913507 containerd[2010]: time="2025-07-15T05:18:29.913467114Z" level=info msg="StartContainer for \"2fec486c551b72144ee45f849403a9fe267dffc543f549e5dbac13e99e0bfd0c\" returns successfully" Jul 15 05:18:31.059047 containerd[2010]: time="2025-07-15T05:18:31.058633977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cfe945a2054d2af152586de854ee27420e44eb4e185f005e43d8bc3cdbaaca\" id:\"0e9ffb3d067adca8fef003ce80a1cd4b0d9af71223259a83d5896326dc35da4b\" pid:6842 exited_at:{seconds:1752556711 nanos:58322177}"