Jul 15 23:56:36.918132 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025
Jul 15 23:56:36.918172 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:56:36.918188 kernel: BIOS-provided physical RAM map:
Jul 15 23:56:36.918200 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 15 23:56:36.918211 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jul 15 23:56:36.918223 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 15 23:56:36.919246 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 15 23:56:36.919265 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 15 23:56:36.919283 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 15 23:56:36.919296 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 15 23:56:36.919308 kernel: NX (Execute Disable) protection: active
Jul 15 23:56:36.919320 kernel: APIC: Static calls initialized
Jul 15 23:56:36.919333 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jul 15 23:56:36.919346 kernel: extended physical RAM map:
Jul 15 23:56:36.919365 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 15 23:56:36.919379 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jul 15 23:56:36.919393 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jul 15 23:56:36.919407 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jul 15 23:56:36.919421 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 15 23:56:36.919434 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 15 23:56:36.919448 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 15 23:56:36.919462 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 15 23:56:36.919475 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 15 23:56:36.919489 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:56:36.919506 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Jul 15 23:56:36.919519 kernel: secureboot: Secure boot disabled
Jul 15 23:56:36.919533 kernel: SMBIOS 2.7 present.
Jul 15 23:56:36.919547 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 15 23:56:36.919560 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:56:36.919573 kernel: Hypervisor detected: KVM
Jul 15 23:56:36.919587 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 23:56:36.919600 kernel: kvm-clock: using sched offset of 4978428477 cycles
Jul 15 23:56:36.919615 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 23:56:36.919629 kernel: tsc: Detected 2499.994 MHz processor
Jul 15 23:56:36.919644 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 23:56:36.919660 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 23:56:36.919674 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jul 15 23:56:36.919688 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 15 23:56:36.919700 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 23:56:36.919711 kernel: Using GB pages for direct mapping
Jul 15 23:56:36.919728 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:56:36.919743 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jul 15 23:56:36.919758 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 15 23:56:36.919773 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 15 23:56:36.919787 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 15 23:56:36.919801 kernel: ACPI: FACS 0x00000000789D0000 000040
Jul 15 23:56:36.919815 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 15 23:56:36.919829 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 15 23:56:36.919843 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 15 23:56:36.919860 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 15 23:56:36.919874 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 15 23:56:36.919888 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 15 23:56:36.919902 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 15 23:56:36.919916 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jul 15 23:56:36.919931 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jul 15 23:56:36.919945 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jul 15 23:56:36.919959 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jul 15 23:56:36.919976 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jul 15 23:56:36.919990 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jul 15 23:56:36.920004 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jul 15 23:56:36.920018 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jul 15 23:56:36.920032 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jul 15 23:56:36.920046 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jul 15 23:56:36.920060 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jul 15 23:56:36.920073 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jul 15 23:56:36.920087 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 15 23:56:36.920101 kernel: NUMA: Initialized distance table, cnt=1
Jul 15 23:56:36.920118 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Jul 15 23:56:36.920132 kernel: Zone ranges:
Jul 15 23:56:36.920145 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 23:56:36.920159 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jul 15 23:56:36.920172 kernel: Normal empty
Jul 15 23:56:36.920186 kernel: Device empty
Jul 15 23:56:36.920199 kernel: Movable zone start for each node
Jul 15 23:56:36.920213 kernel: Early memory node ranges
Jul 15 23:56:36.920227 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 15 23:56:36.920732 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jul 15 23:56:36.920748 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jul 15 23:56:36.920762 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jul 15 23:56:36.920776 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 23:56:36.920790 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 15 23:56:36.920804 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 15 23:56:36.920819 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jul 15 23:56:36.920832 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 15 23:56:36.920846 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 23:56:36.920863 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 15 23:56:36.920878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 23:56:36.920892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 23:56:36.920906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 23:56:36.920920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 23:56:36.920934 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 23:56:36.920948 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 23:56:36.920961 kernel: TSC deadline timer available
Jul 15 23:56:36.920975 kernel: CPU topo: Max. logical packages: 1
Jul 15 23:56:36.920989 kernel: CPU topo: Max. logical dies: 1
Jul 15 23:56:36.921005 kernel: CPU topo: Max. dies per package: 1
Jul 15 23:56:36.921019 kernel: CPU topo: Max. threads per core: 2
Jul 15 23:56:36.921033 kernel: CPU topo: Num. cores per package: 1
Jul 15 23:56:36.921047 kernel: CPU topo: Num. threads per package: 2
Jul 15 23:56:36.921061 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 15 23:56:36.921075 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 23:56:36.921090 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jul 15 23:56:36.921103 kernel: Booting paravirtualized kernel on KVM
Jul 15 23:56:36.921118 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 23:56:36.921135 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 15 23:56:36.921149 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 15 23:56:36.921163 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 15 23:56:36.921177 kernel: pcpu-alloc: [0] 0 1
Jul 15 23:56:36.921190 kernel: kvm-guest: PV spinlocks enabled
Jul 15 23:56:36.921205 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 23:56:36.921222 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:56:36.922267 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:56:36.922289 kernel: random: crng init done
Jul 15 23:56:36.922303 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:56:36.922316 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 15 23:56:36.922329 kernel: Fallback order for Node 0: 0
Jul 15 23:56:36.922343 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jul 15 23:56:36.922356 kernel: Policy zone: DMA32
Jul 15 23:56:36.922380 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:56:36.922396 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 23:56:36.922410 kernel: Kernel/User page tables isolation: enabled
Jul 15 23:56:36.922423 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 15 23:56:36.922436 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 23:56:36.922453 kernel: Dynamic Preempt: voluntary
Jul 15 23:56:36.922488 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:56:36.922503 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:56:36.922515 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 23:56:36.922529 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:56:36.922544 kernel: Rude variant of Tasks RCU enabled.
Jul 15 23:56:36.922576 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:56:36.922589 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:56:36.922604 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 23:56:36.922618 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:56:36.922634 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:56:36.922649 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:56:36.922664 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 15 23:56:36.922675 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:56:36.922690 kernel: Console: colour dummy device 80x25
Jul 15 23:56:36.922707 kernel: printk: legacy console [tty0] enabled
Jul 15 23:56:36.922722 kernel: printk: legacy console [ttyS0] enabled
Jul 15 23:56:36.922735 kernel: ACPI: Core revision 20240827
Jul 15 23:56:36.922749 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 15 23:56:36.922764 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 23:56:36.922777 kernel: x2apic enabled
Jul 15 23:56:36.922790 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 23:56:36.922811 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jul 15 23:56:36.922826 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Jul 15 23:56:36.922843 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 15 23:56:36.922856 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 15 23:56:36.922869 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 23:56:36.922882 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 23:56:36.922898 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 23:56:36.922911 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 15 23:56:36.922923 kernel: RETBleed: Vulnerable
Jul 15 23:56:36.922936 kernel: Speculative Store Bypass: Vulnerable
Jul 15 23:56:36.922950 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 15 23:56:36.922965 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 15 23:56:36.922983 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 15 23:56:36.922995 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 15 23:56:36.923010 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 23:56:36.923025 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 23:56:36.923040 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 23:56:36.923055 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 15 23:56:36.923070 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 15 23:56:36.923086 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 15 23:56:36.923102 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 15 23:56:36.923117 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 15 23:56:36.923132 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 15 23:56:36.923151 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 23:56:36.923166 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 15 23:56:36.923182 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 15 23:56:36.923197 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 15 23:56:36.923212 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 15 23:56:36.923227 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 15 23:56:36.923873 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 15 23:56:36.923889 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 15 23:56:36.923904 kernel: Freeing SMP alternatives memory: 32K
Jul 15 23:56:36.923919 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:56:36.923933 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:56:36.923952 kernel: landlock: Up and running.
Jul 15 23:56:36.923966 kernel: SELinux: Initializing.
Jul 15 23:56:36.923981 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 15 23:56:36.923996 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 15 23:56:36.924010 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 15 23:56:36.924025 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 15 23:56:36.924040 kernel: signal: max sigframe size: 3632
Jul 15 23:56:36.924054 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:56:36.924070 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:56:36.924085 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:56:36.924103 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 15 23:56:36.924117 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:56:36.924132 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 23:56:36.924146 kernel: .... node #0, CPUs: #1
Jul 15 23:56:36.924162 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 15 23:56:36.924179 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 15 23:56:36.924193 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 23:56:36.924208 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Jul 15 23:56:36.924223 kernel: Memory: 1908056K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 125188K reserved, 0K cma-reserved)
Jul 15 23:56:36.924261 kernel: devtmpfs: initialized
Jul 15 23:56:36.924274 kernel: x86/mm: Memory block size: 128MB
Jul 15 23:56:36.924289 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jul 15 23:56:36.924304 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:56:36.924319 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 23:56:36.924334 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:56:36.924349 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:56:36.924363 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:56:36.924378 kernel: audit: type=2000 audit(1752623794.529:1): state=initialized audit_enabled=0 res=1
Jul 15 23:56:36.924395 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:56:36.924410 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 23:56:36.924424 kernel: cpuidle: using governor menu
Jul 15 23:56:36.924438 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:56:36.924452 kernel: dca service started, version 1.12.1
Jul 15 23:56:36.924466 kernel: PCI: Using configuration type 1 for base access
Jul 15 23:56:36.924480 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 23:56:36.924493 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:56:36.924508 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:56:36.924526 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:56:36.924540 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:56:36.924555 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:56:36.924575 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:56:36.924593 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:56:36.924607 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 15 23:56:36.924621 kernel: ACPI: Interpreter enabled
Jul 15 23:56:36.924636 kernel: ACPI: PM: (supports S0 S5)
Jul 15 23:56:36.924650 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 23:56:36.924670 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 23:56:36.924685 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 23:56:36.924698 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 15 23:56:36.924713 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:56:36.924936 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:56:36.925073 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 15 23:56:36.925198 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 15 23:56:36.925220 kernel: acpiphp: Slot [3] registered
Jul 15 23:56:36.926790 kernel: acpiphp: Slot [4] registered
Jul 15 23:56:36.926811 kernel: acpiphp: Slot [5] registered
Jul 15 23:56:36.926826 kernel: acpiphp: Slot [6] registered
Jul 15 23:56:36.926841 kernel: acpiphp: Slot [7] registered
Jul 15 23:56:36.926856 kernel: acpiphp: Slot [8] registered
Jul 15 23:56:36.926871 kernel: acpiphp: Slot [9] registered
Jul 15 23:56:36.926886 kernel: acpiphp: Slot [10] registered
Jul 15 23:56:36.926900 kernel: acpiphp: Slot [11] registered
Jul 15 23:56:36.926920 kernel: acpiphp: Slot [12] registered
Jul 15 23:56:36.926934 kernel: acpiphp: Slot [13] registered
Jul 15 23:56:36.926948 kernel: acpiphp: Slot [14] registered
Jul 15 23:56:36.926963 kernel: acpiphp: Slot [15] registered
Jul 15 23:56:36.926978 kernel: acpiphp: Slot [16] registered
Jul 15 23:56:36.926993 kernel: acpiphp: Slot [17] registered
Jul 15 23:56:36.927007 kernel: acpiphp: Slot [18] registered
Jul 15 23:56:36.927022 kernel: acpiphp: Slot [19] registered
Jul 15 23:56:36.927036 kernel: acpiphp: Slot [20] registered
Jul 15 23:56:36.927050 kernel: acpiphp: Slot [21] registered
Jul 15 23:56:36.927067 kernel: acpiphp: Slot [22] registered
Jul 15 23:56:36.927082 kernel: acpiphp: Slot [23] registered
Jul 15 23:56:36.927096 kernel: acpiphp: Slot [24] registered
Jul 15 23:56:36.927111 kernel: acpiphp: Slot [25] registered
Jul 15 23:56:36.927126 kernel: acpiphp: Slot [26] registered
Jul 15 23:56:36.927140 kernel: acpiphp: Slot [27] registered
Jul 15 23:56:36.927154 kernel: acpiphp: Slot [28] registered
Jul 15 23:56:36.927169 kernel: acpiphp: Slot [29] registered
Jul 15 23:56:36.927184 kernel: acpiphp: Slot [30] registered
Jul 15 23:56:36.927200 kernel: acpiphp: Slot [31] registered
Jul 15 23:56:36.927216 kernel: PCI host bridge to bus 0000:00
Jul 15 23:56:36.927422 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 23:56:36.927588 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 23:56:36.927717 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 23:56:36.927837 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 15 23:56:36.927956 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jul 15 23:56:36.928079 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:56:36.928257 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:56:36.928421 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 15 23:56:36.928572 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jul 15 23:56:36.928709 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 15 23:56:36.928849 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 15 23:56:36.928995 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 15 23:56:36.929133 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 15 23:56:36.930362 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 15 23:56:36.930519 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 15 23:56:36.930657 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 15 23:56:36.930806 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 23:56:36.930943 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jul 15 23:56:36.931082 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 15 23:56:36.931216 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 23:56:36.932441 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jul 15 23:56:36.932590 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jul 15 23:56:36.932735 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jul 15 23:56:36.932872 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jul 15 23:56:36.932892 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 23:56:36.932913 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 23:56:36.932929 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 23:56:36.932945 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 23:56:36.932961 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 15 23:56:36.932978 kernel: iommu: Default domain type: Translated
Jul 15 23:56:36.932993 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 23:56:36.933009 kernel: efivars: Registered efivars operations
Jul 15 23:56:36.933025 kernel: PCI: Using ACPI for IRQ routing
Jul 15 23:56:36.933040 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 23:56:36.933059 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jul 15 23:56:36.933074 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jul 15 23:56:36.933089 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jul 15 23:56:36.933227 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 15 23:56:36.938488 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 15 23:56:36.938634 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 23:56:36.938655 kernel: vgaarb: loaded
Jul 15 23:56:36.938672 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 15 23:56:36.938695 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 15 23:56:36.938712 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 23:56:36.938727 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:56:36.938743 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:56:36.938760 kernel: pnp: PnP ACPI init
Jul 15 23:56:36.938777 kernel: pnp: PnP ACPI: found 5 devices
Jul 15 23:56:36.938794 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 23:56:36.938809 kernel: NET: Registered PF_INET protocol family
Jul 15 23:56:36.938826 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:56:36.938845 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 15 23:56:36.938861 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:56:36.938878 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 15 23:56:36.938894 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 15 23:56:36.938910 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 15 23:56:36.938925 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 15 23:56:36.938941 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 15 23:56:36.938958 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:56:36.938973 kernel: NET: Registered PF_XDP protocol family
Jul 15 23:56:36.939107 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 23:56:36.939229 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 23:56:36.939364 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 23:56:36.939482 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 15 23:56:36.939600 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jul 15 23:56:36.939741 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 15 23:56:36.939762 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:56:36.939778 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 15 23:56:36.939799 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jul 15 23:56:36.939814 kernel: clocksource: Switched to clocksource tsc
Jul 15 23:56:36.939830 kernel: Initialise system trusted keyrings
Jul 15 23:56:36.939846 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 15 23:56:36.939861 kernel: Key type asymmetric registered
Jul 15 23:56:36.939877 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:56:36.939892 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 23:56:36.939907 kernel: io scheduler mq-deadline registered
Jul 15 23:56:36.939923 kernel: io scheduler kyber registered
Jul 15 23:56:36.939942 kernel: io scheduler bfq registered
Jul 15 23:56:36.939957 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 23:56:36.939973 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:56:36.939990 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 23:56:36.940006 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 23:56:36.940022 kernel: i8042: Warning: Keylock active
Jul 15 23:56:36.940037 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 23:56:36.940053 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 23:56:36.940206 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 15 23:56:36.942427 kernel: rtc_cmos 00:00: registered as rtc0
Jul 15 23:56:36.942564 kernel: rtc_cmos 00:00: setting system clock to 2025-07-15T23:56:36 UTC (1752623796)
Jul 15 23:56:36.942687 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 15 23:56:36.942706 kernel: intel_pstate: CPU model not supported
Jul 15 23:56:36.942748 kernel: efifb: probing for efifb
Jul 15 23:56:36.942767 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jul 15 23:56:36.942783 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jul 15 23:56:36.942801 kernel: efifb: scrolling: redraw
Jul 15 23:56:36.942817 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 15 23:56:36.942832 kernel: Console: switching to colour frame buffer device 100x37
Jul 15 23:56:36.942848 kernel: fb0: EFI VGA frame buffer device
Jul 15 23:56:36.942864 kernel: pstore: Using crash dump compression: deflate
Jul 15 23:56:36.942880 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 15 23:56:36.942895 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:56:36.942911 kernel: Segment Routing with IPv6
Jul 15 23:56:36.942926 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:56:36.942942 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:56:36.942960 kernel: Key type dns_resolver registered
Jul 15 23:56:36.942975 kernel: IPI shorthand broadcast: enabled
Jul 15 23:56:36.942991 kernel: sched_clock: Marking stable (2714078577, 144293856)->(2933589507, -75217074)
Jul 15 23:56:36.943006 kernel: registered taskstats version 1
Jul 15 23:56:36.943022 kernel: Loading compiled-in X.509 certificates
Jul 15 23:56:36.943038 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d'
Jul 15 23:56:36.943053 kernel: Demotion targets for Node 0: null
Jul 15 23:56:36.943069 kernel: Key type .fscrypt registered
Jul 15 23:56:36.943084 kernel: Key type fscrypt-provisioning registered
Jul 15 23:56:36.943102 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:56:36.943118 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:56:36.943134 kernel: ima: No architecture policies found
Jul 15 23:56:36.943149 kernel: clk: Disabling unused clocks
Jul 15 23:56:36.943164 kernel: Warning: unable to open an initial console.
Jul 15 23:56:36.943180 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jul 15 23:56:36.943196 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 23:56:36.943211 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 23:56:36.943230 kernel: Run /init as init process
Jul 15 23:56:36.943265 kernel: with arguments:
Jul 15 23:56:36.943281 kernel: /init
Jul 15 23:56:36.943296 kernel: with environment:
Jul 15 23:56:36.943311 kernel: HOME=/
Jul 15 23:56:36.943327 kernel: TERM=linux
Jul 15 23:56:36.943345 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:56:36.943362 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:56:36.943386 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:56:36.943403 systemd[1]: Detected virtualization amazon.
Jul 15 23:56:36.943419 systemd[1]: Detected architecture x86-64.
Jul 15 23:56:36.943434 systemd[1]: Running in initrd.
Jul 15 23:56:36.943451 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:56:36.943471 systemd[1]: Hostname set to .
Jul 15 23:56:36.943486 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:56:36.943503 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:56:36.943519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:56:36.943536 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:56:36.943553 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:56:36.943570 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:56:36.943587 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:56:36.943607 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:56:36.943625 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:56:36.943642 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:56:36.943658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:56:36.943675 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:56:36.943691 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:56:36.943707 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:56:36.943726 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:56:36.943742 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:56:36.943759 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:56:36.943775 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:56:36.943791 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:56:36.943808 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:56:36.943824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:56:36.943840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:56:36.943857 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:56:36.943876 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:56:36.943893 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:56:36.943909 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:56:36.943926 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:56:36.943943 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:56:36.943959 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:56:36.943976 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:56:36.943992 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:56:36.944011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:56:36.944027 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:56:36.944072 systemd-journald[207]: Collecting audit messages is disabled.
Jul 15 23:56:36.944111 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:56:36.944128 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:56:36.944145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:56:36.944162 systemd-journald[207]: Journal started
Jul 15 23:56:36.944199 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2c846c69f4aff83bcac28467ae0e07) is 4.8M, max 38.4M, 33.6M free.
Jul 15 23:56:36.922850 systemd-modules-load[208]: Inserted module 'overlay'
Jul 15 23:56:36.955124 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:56:36.965267 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:56:36.968097 systemd-modules-load[208]: Inserted module 'br_netfilter'
Jul 15 23:56:36.968829 kernel: Bridge firewalling registered
Jul 15 23:56:36.969417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:56:36.970465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:56:36.976391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:56:36.981388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:56:36.984477 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:56:36.988168 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:56:36.996430 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:56:37.005260 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 23:56:37.004354 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:56:37.018399 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:56:37.020620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:56:37.021502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:56:37.025495 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:56:37.030434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:56:37.032201 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:56:37.064416 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:56:37.092744 systemd-resolved[245]: Positive Trust Anchors:
Jul 15 23:56:37.093765 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:56:37.093829 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:56:37.101160 systemd-resolved[245]: Defaulting to hostname 'linux'.
Jul 15 23:56:37.104526 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:56:37.105255 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:56:37.163281 kernel: SCSI subsystem initialized
Jul 15 23:56:37.173273 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:56:37.185272 kernel: iscsi: registered transport (tcp)
Jul 15 23:56:37.207275 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:56:37.207362 kernel: QLogic iSCSI HBA Driver
Jul 15 23:56:37.226339 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:56:37.246784 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:56:37.249035 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:56:37.299110 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:56:37.300856 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:56:37.351282 kernel: raid6: avx512x4 gen() 17648 MB/s
Jul 15 23:56:37.369267 kernel: raid6: avx512x2 gen() 17657 MB/s
Jul 15 23:56:37.387280 kernel: raid6: avx512x1 gen() 17723 MB/s
Jul 15 23:56:37.405263 kernel: raid6: avx2x4 gen() 17485 MB/s
Jul 15 23:56:37.423267 kernel: raid6: avx2x2 gen() 17570 MB/s
Jul 15 23:56:37.441528 kernel: raid6: avx2x1 gen() 13511 MB/s
Jul 15 23:56:37.441614 kernel: raid6: using algorithm avx512x1 gen() 17723 MB/s
Jul 15 23:56:37.464719 kernel: raid6: .... xor() 15386 MB/s, rmw enabled
Jul 15 23:56:37.464807 kernel: raid6: using avx512x2 recovery algorithm
Jul 15 23:56:37.494279 kernel: xor: automatically using best checksumming function avx
Jul 15 23:56:37.664267 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:56:37.670801 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:56:37.672998 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:56:37.702403 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jul 15 23:56:37.709040 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:56:37.712946 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:56:37.745859 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Jul 15 23:56:37.752262 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jul 15 23:56:37.775780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:56:37.777845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:56:37.838792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:56:37.842462 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:56:37.943282 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 23:56:37.952457 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 15 23:56:37.952770 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 15 23:56:37.966260 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 15 23:56:37.974267 kernel: AES CTR mode by8 optimization enabled
Jul 15 23:56:37.973622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:56:37.989729 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 15 23:56:37.989995 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 15 23:56:37.990024 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:08:9c:f7:52:8d
Jul 15 23:56:37.973905 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:56:37.996412 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 15 23:56:37.991465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:56:38.000896 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:56:38.004515 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:56:38.025033 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:56:38.025069 kernel: GPT:9289727 != 16777215
Jul 15 23:56:38.025089 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:56:38.025110 kernel: GPT:9289727 != 16777215
Jul 15 23:56:38.025129 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:56:38.025149 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:56:38.019005 (udev-worker)[503]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 23:56:38.027733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:56:38.028727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:56:38.031845 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:56:38.058975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:56:38.064254 kernel: nvme nvme0: using unchecked data buffer
Jul 15 23:56:38.180816 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 15 23:56:38.181592 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:56:38.200698 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 15 23:56:38.201451 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 15 23:56:38.213091 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 15 23:56:38.224034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 15 23:56:38.224672 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:56:38.226037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:56:38.227177 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:56:38.228913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 23:56:38.233410 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 23:56:38.253871 disk-uuid[691]: Primary Header is updated.
Jul 15 23:56:38.253871 disk-uuid[691]: Secondary Entries is updated.
Jul 15 23:56:38.253871 disk-uuid[691]: Secondary Header is updated.
Jul 15 23:56:38.260294 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:56:38.265304 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:56:38.281275 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:56:39.287285 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:56:39.288097 disk-uuid[694]: The operation has completed successfully.
Jul 15 23:56:39.402422 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 23:56:39.402528 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 23:56:39.435207 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 23:56:39.452787 sh[957]: Success
Jul 15 23:56:39.480804 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 23:56:39.480909 kernel: device-mapper: uevent: version 1.0.3
Jul 15 23:56:39.481780 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 23:56:39.494255 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 15 23:56:39.592626 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 23:56:39.596337 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 23:56:39.604368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 23:56:39.624676 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 23:56:39.624743 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (980)
Jul 15 23:56:39.631205 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e
Jul 15 23:56:39.631278 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:56:39.631309 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 23:56:39.809715 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 23:56:39.810673 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:56:39.811496 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 23:56:39.812900 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 23:56:39.816402 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 23:56:39.857280 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1015)
Jul 15 23:56:39.862191 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:56:39.862289 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:56:39.863419 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:56:39.888392 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:56:39.886974 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 23:56:39.891057 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 23:56:39.921174 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:56:39.923806 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:56:39.965455 systemd-networkd[1149]: lo: Link UP
Jul 15 23:56:39.965468 systemd-networkd[1149]: lo: Gained carrier
Jul 15 23:56:39.967322 systemd-networkd[1149]: Enumeration completed
Jul 15 23:56:39.967754 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:56:39.967759 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:56:39.968757 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:56:39.970337 systemd[1]: Reached target network.target - Network.
Jul 15 23:56:39.971865 systemd-networkd[1149]: eth0: Link UP
Jul 15 23:56:39.971871 systemd-networkd[1149]: eth0: Gained carrier
Jul 15 23:56:39.971889 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:56:39.983329 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.28.113/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 15 23:56:40.320524 ignition[1112]: Ignition 2.21.0
Jul 15 23:56:40.320539 ignition[1112]: Stage: fetch-offline
Jul 15 23:56:40.320730 ignition[1112]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:56:40.320739 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:56:40.322494 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:56:40.320937 ignition[1112]: Ignition finished successfully
Jul 15 23:56:40.324244 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 23:56:40.350473 ignition[1159]: Ignition 2.21.0
Jul 15 23:56:40.350487 ignition[1159]: Stage: fetch
Jul 15 23:56:40.350773 ignition[1159]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:56:40.350782 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:56:40.350865 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:56:40.394427 ignition[1159]: PUT result: OK
Jul 15 23:56:40.397265 ignition[1159]: parsed url from cmdline: ""
Jul 15 23:56:40.397274 ignition[1159]: no config URL provided
Jul 15 23:56:40.397290 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 23:56:40.397302 ignition[1159]: no config at "/usr/lib/ignition/user.ign"
Jul 15 23:56:40.397319 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:56:40.406439 ignition[1159]: PUT result: OK
Jul 15 23:56:40.406526 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 15 23:56:40.408834 ignition[1159]: GET result: OK
Jul 15 23:56:40.409294 ignition[1159]: parsing config with SHA512: c2c34bc7a246345601ef9e3fe6e953fe305a192134c679f2f318830f3c7a30900d5887c73603537ad751887e66156dd48f126b337ac85c3cbebad5454a107eb7
Jul 15 23:56:40.415288 unknown[1159]: fetched base config from "system"
Jul 15 23:56:40.415971 unknown[1159]: fetched base config from "system"
Jul 15 23:56:40.415987 unknown[1159]: fetched user config from "aws"
Jul 15 23:56:40.416328 ignition[1159]: fetch: fetch complete
Jul 15 23:56:40.416333 ignition[1159]: fetch: fetch passed
Jul 15 23:56:40.416376 ignition[1159]: Ignition finished successfully
Jul 15 23:56:40.418751 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 23:56:40.420123 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 23:56:40.446026 ignition[1166]: Ignition 2.21.0
Jul 15 23:56:40.446044 ignition[1166]: Stage: kargs
Jul 15 23:56:40.446442 ignition[1166]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:56:40.446455 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:56:40.446575 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:56:40.448247 ignition[1166]: PUT result: OK
Jul 15 23:56:40.452096 ignition[1166]: kargs: kargs passed
Jul 15 23:56:40.452170 ignition[1166]: Ignition finished successfully
Jul 15 23:56:40.454359 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 23:56:40.455791 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 23:56:40.477397 ignition[1172]: Ignition 2.21.0
Jul 15 23:56:40.477412 ignition[1172]: Stage: disks
Jul 15 23:56:40.477988 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:56:40.477998 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:56:40.478114 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:56:40.480183 ignition[1172]: PUT result: OK
Jul 15 23:56:40.482900 ignition[1172]: disks: disks passed
Jul 15 23:56:40.482974 ignition[1172]: Ignition finished successfully
Jul 15 23:56:40.484889 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 23:56:40.485624 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 23:56:40.486053 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 23:56:40.486897 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:56:40.487315 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:56:40.487810 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:56:40.489418 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 23:56:40.531658 systemd-fsck[1180]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 23:56:40.534328 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 23:56:40.536118 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 23:56:40.695266 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none.
Jul 15 23:56:40.696457 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 23:56:40.697352 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:56:40.699540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:56:40.702328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 23:56:40.703497 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 23:56:40.704177 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 23:56:40.704208 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:56:40.716904 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 23:56:40.719224 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 23:56:40.736285 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1199)
Jul 15 23:56:40.740312 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:56:40.740376 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:56:40.740391 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:56:40.747976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:56:41.092919 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 23:56:41.109464 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory
Jul 15 23:56:41.114501 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 23:56:41.118438 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 23:56:41.224413 systemd-networkd[1149]: eth0: Gained IPv6LL
Jul 15 23:56:41.321484 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 23:56:41.323589 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 23:56:41.327408 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 23:56:41.341163 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 23:56:41.343519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:56:41.370779 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 23:56:41.376257 ignition[1313]: INFO : Ignition 2.21.0
Jul 15 23:56:41.376257 ignition[1313]: INFO : Stage: mount
Jul 15 23:56:41.378992 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:56:41.378992 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:56:41.378992 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:56:41.380831 ignition[1313]: INFO : PUT result: OK
Jul 15 23:56:41.383273 ignition[1313]: INFO : mount: mount passed
Jul 15 23:56:41.383853 ignition[1313]: INFO : Ignition finished successfully
Jul 15 23:56:41.385341 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 23:56:41.387032 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 23:56:41.698782 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:56:41.730397 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326)
Jul 15 23:56:41.730462 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:56:41.734093 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:56:41.734166 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:56:41.741790 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:56:41.768350 ignition[1342]: INFO : Ignition 2.21.0
Jul 15 23:56:41.770258 ignition[1342]: INFO : Stage: files
Jul 15 23:56:41.770258 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:56:41.770258 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:56:41.770258 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:56:41.772611 ignition[1342]: INFO : PUT result: OK
Jul 15 23:56:41.775486 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 23:56:41.777198 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 23:56:41.778137 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 23:56:41.792021 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 23:56:41.792816 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 23:56:41.792816 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 23:56:41.792429 unknown[1342]: wrote ssh authorized keys file for user: core
Jul 15 23:56:41.806699 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 15 23:56:41.807673 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 15 23:56:41.882094 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 23:56:42.094181 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 15 23:56:42.094181 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 23:56:42.095738 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 23:56:42.095738 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:56:42.095738 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:56:42.095738 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:56:42.095738 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:56:42.095738 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:56:42.095738 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:56:42.100287 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:56:42.100287 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:56:42.100287 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 23:56:42.102772 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 23:56:42.102772 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 23:56:42.102772 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 15 23:56:42.617415 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 23:56:43.603790 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 23:56:43.603790 ignition[1342]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 23:56:43.614000 ignition[1342]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:56:43.618666 ignition[1342]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:56:43.618666 ignition[1342]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 23:56:43.618666 ignition[1342]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 23:56:43.622049 ignition[1342]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 23:56:43.622049 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:56:43.622049 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:56:43.622049 ignition[1342]: INFO : files: files passed
Jul 15 23:56:43.622049 ignition[1342]: INFO : Ignition finished successfully
Jul 15 23:56:43.620453 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 23:56:43.622796 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 23:56:43.627163 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 23:56:43.638480 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 23:56:43.638587 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 23:56:43.644097 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:56:43.646118 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:56:43.646921 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:56:43.647448 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:56:43.648532 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 23:56:43.650084 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 23:56:43.699031 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 23:56:43.699194 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 23:56:43.700439 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 23:56:43.701649 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 23:56:43.702423 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 23:56:43.703654 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 23:56:43.729714 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:56:43.731923 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:56:43.758192 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:56:43.758830 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:56:43.759748 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 23:56:43.760620 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 23:56:43.760778 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:56:43.761862 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 23:56:43.762564 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 23:56:43.763313 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 23:56:43.764097 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:56:43.764724 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 23:56:43.765484 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:56:43.766435 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 23:56:43.767075 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:56:43.767763 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 23:56:43.768574 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 23:56:43.769701 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 23:56:43.770369 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 23:56:43.770500 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:56:43.771445 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:56:43.772195 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:56:43.772804 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 23:56:43.772900 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:56:43.773462 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 23:56:43.773721 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:56:43.774747 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 23:56:43.774921 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:56:43.775887 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 23:56:43.776014 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 23:56:43.778336 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 23:56:43.782412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 23:56:43.783289 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 23:56:43.783759 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:56:43.784729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 23:56:43.785020 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:56:43.795104 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 23:56:43.795774 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 23:56:43.805905 ignition[1396]: INFO : Ignition 2.21.0
Jul 15 23:56:43.806649 ignition[1396]: INFO : Stage: umount
Jul 15 23:56:43.806649 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:56:43.806649 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:56:43.806649 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:56:43.808764 ignition[1396]: INFO : PUT result: OK
Jul 15 23:56:43.808981 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 23:56:43.813256 ignition[1396]: INFO : umount: umount passed
Jul 15 23:56:43.813256 ignition[1396]: INFO : Ignition finished successfully
Jul 15 23:56:43.814112 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 23:56:43.814223 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 23:56:43.814854 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 23:56:43.814901 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 23:56:43.815275 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 23:56:43.815317 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 23:56:43.815912 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 23:56:43.815963 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 23:56:43.817761 systemd[1]: Stopped target network.target - Network.
Jul 15 23:56:43.818332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 23:56:43.818390 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:56:43.818934 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 23:56:43.819539 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 23:56:43.821301 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:56:43.821818 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 23:56:43.822395 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 23:56:43.823039 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 23:56:43.823082 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:56:43.823905 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 23:56:43.823943 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:56:43.824517 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 23:56:43.824579 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 23:56:43.825161 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 23:56:43.825206 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 23:56:43.825998 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 23:56:43.826511 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 23:56:43.827692 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 23:56:43.827782 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 23:56:43.828820 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 23:56:43.828919 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 23:56:43.832782 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 23:56:43.832895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 23:56:43.836271 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 23:56:43.836476 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 23:56:43.836571 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 23:56:43.838476 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 23:56:43.838784 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 23:56:43.839461 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 23:56:43.839496 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:56:43.840827 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 23:56:43.841149 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 23:56:43.841195 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:56:43.841653 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:56:43.841691 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:56:43.842075 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:56:43.842110 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:56:43.845220 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:56:43.845285 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:56:43.846048 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:56:43.849188 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:56:43.849279 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:56:43.862106 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:56:43.862281 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:56:43.863451 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:56:43.863593 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:56:43.865305 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:56:43.866012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:56:43.866708 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:56:43.866760 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:56:43.867559 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:56:43.867635 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:56:43.868755 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:56:43.868822 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:56:43.870110 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:56:43.870181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:56:43.872329 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:56:43.873336 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:56:43.873412 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:56:43.876696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:56:43.876778 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:56:43.878711 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 23:56:43.878785 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:56:43.879528 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:56:43.879592 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:56:43.880192 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:56:43.880279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:56:43.885604 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 23:56:43.885707 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 15 23:56:43.885759 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 23:56:43.885814 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:56:43.892457 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:56:43.892605 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:56:43.893995 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:56:43.895790 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:56:43.915109 systemd[1]: Switching root.
Jul 15 23:56:43.945034 systemd-journald[207]: Journal stopped
Jul 15 23:56:45.740968 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:56:45.741061 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:56:45.741093 kernel: SELinux: policy capability open_perms=1
Jul 15 23:56:45.741123 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:56:45.741144 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:56:45.741165 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:56:45.741190 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:56:45.741210 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:56:45.741245 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:56:45.741262 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:56:45.741287 kernel: audit: type=1403 audit(1752623804.355:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:56:45.741307 systemd[1]: Successfully loaded SELinux policy in 91.059ms.
Jul 15 23:56:45.741338 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.255ms.
Jul 15 23:56:45.741360 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:56:45.741380 systemd[1]: Detected virtualization amazon.
Jul 15 23:56:45.741400 systemd[1]: Detected architecture x86-64.
Jul 15 23:56:45.741419 systemd[1]: Detected first boot.
Jul 15 23:56:45.741438 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:56:45.741457 kernel: Guest personality initialized and is inactive
Jul 15 23:56:45.741474 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 23:56:45.741501 kernel: Initialized host personality
Jul 15 23:56:45.741524 zram_generator::config[1440]: No configuration found.
Jul 15 23:56:45.741543 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:56:45.741562 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:56:45.741582 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:56:45.741607 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:56:45.741626 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:56:45.741645 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:56:45.741665 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:56:45.741688 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:56:45.741707 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:56:45.741726 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:56:45.741745 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:56:45.741764 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:56:45.741783 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:56:45.741802 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:56:45.741821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:56:45.741844 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:56:45.741860 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:56:45.741877 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:56:45.741898 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:56:45.741918 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:56:45.741937 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 23:56:45.741957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:56:45.741977 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:56:45.742013 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:56:45.742033 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:56:45.742053 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:56:45.742072 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:56:45.742092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:56:45.742111 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:56:45.742131 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:56:45.742150 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:56:45.742169 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:56:45.742194 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:56:45.742214 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:56:45.752895 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:56:45.752949 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:56:45.752972 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:56:45.752993 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:56:45.753015 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:56:45.753037 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:56:45.753059 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:56:45.753087 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:56:45.753109 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:56:45.753130 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:56:45.753151 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:56:45.753173 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:56:45.753196 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:56:45.753217 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:56:45.753342 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:56:45.753369 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:56:45.753390 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:56:45.753412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:56:45.753433 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:56:45.753454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:56:45.753475 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:56:45.753504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:56:45.753525 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:56:45.753546 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:56:45.753571 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:56:45.753593 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:56:45.753613 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:56:45.753636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:56:45.753663 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:56:45.753684 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:56:45.753708 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:56:45.753730 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:56:45.753750 kernel: loop: module loaded
Jul 15 23:56:45.753772 kernel: fuse: init (API version 7.41)
Jul 15 23:56:45.753792 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:56:45.753813 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:56:45.753837 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:56:45.753858 systemd[1]: Stopped verity-setup.service.
Jul 15 23:56:45.753880 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:56:45.753901 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:56:45.753923 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:56:45.753943 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:56:45.753965 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:56:45.753989 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:56:45.754010 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:56:45.754032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:56:45.754053 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:56:45.754075 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:56:45.754096 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:56:45.754117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:56:45.754138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:56:45.754159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:56:45.754183 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:56:45.754205 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:56:45.754225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:56:45.754260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:56:45.754283 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:56:45.754305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:56:45.754326 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:56:45.754348 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:56:45.754370 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:56:45.754395 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:56:45.754416 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:56:45.754437 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:56:45.754458 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:56:45.754482 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:56:45.754504 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:56:45.754573 systemd-journald[1533]: Collecting audit messages is disabled.
Jul 15 23:56:45.754615 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:56:45.754636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:56:45.754659 systemd-journald[1533]: Journal started
Jul 15 23:56:45.754704 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec2c846c69f4aff83bcac28467ae0e07) is 4.8M, max 38.4M, 33.6M free.
Jul 15 23:56:45.764793 kernel: ACPI: bus type drm_connector registered
Jul 15 23:56:45.280743 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:56:45.293842 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 15 23:56:45.294301 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:56:45.770327 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:56:45.774259 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:56:45.786263 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:56:45.790265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:56:45.794263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:56:45.803690 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:56:45.809260 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:56:45.818789 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:56:45.821722 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:56:45.822246 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:56:45.823576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:56:45.824536 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:56:45.826458 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:56:45.829379 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:56:45.846876 kernel: loop0: detected capacity change from 0 to 229808
Jul 15 23:56:45.859890 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:56:45.865431 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:56:45.870414 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:56:45.887843 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Jul 15 23:56:45.888224 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Jul 15 23:56:45.902253 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:56:45.905000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:56:45.915185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:56:45.916399 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec2c846c69f4aff83bcac28467ae0e07 is 43.213ms for 1028 entries.
Jul 15 23:56:45.916399 systemd-journald[1533]: System Journal (/var/log/journal/ec2c846c69f4aff83bcac28467ae0e07) is 8M, max 195.6M, 187.6M free.
Jul 15 23:56:45.964587 systemd-journald[1533]: Received client request to flush runtime journal.
Jul 15 23:56:45.964653 kernel: loop1: detected capacity change from 0 to 72352
Jul 15 23:56:45.920318 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:56:45.928148 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:56:45.968661 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:56:46.024283 kernel: loop2: detected capacity change from 0 to 146240
Jul 15 23:56:46.029771 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:56:46.033405 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:56:46.069586 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jul 15 23:56:46.069949 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jul 15 23:56:46.076901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:56:46.159267 kernel: loop3: detected capacity change from 0 to 113872
Jul 15 23:56:46.269680 kernel: loop4: detected capacity change from 0 to 229808
Jul 15 23:56:46.297715 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:56:46.308277 kernel: loop5: detected capacity change from 0 to 72352
Jul 15 23:56:46.328325 kernel: loop6: detected capacity change from 0 to 146240
Jul 15 23:56:46.356256 kernel: loop7: detected capacity change from 0 to 113872
Jul 15 23:56:46.383225 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 15 23:56:46.384278 (sd-merge)[1600]: Merged extensions into '/usr'.
Jul 15 23:56:46.401170 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:56:46.401329 systemd[1]: Reloading...
Jul 15 23:56:46.510266 zram_generator::config[1622]: No configuration found.
Jul 15 23:56:46.701656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:56:46.871574 systemd[1]: Reloading finished in 469 ms.
Jul 15 23:56:46.895528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:56:46.904382 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:56:46.909850 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:56:46.939289 systemd[1]: Reload requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:56:46.939308 systemd[1]: Reloading...
Jul 15 23:56:46.953252 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:56:46.954943 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:56:46.959735 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:56:46.960147 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:56:46.961966 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:56:46.962589 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Jul 15 23:56:46.962764 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Jul 15 23:56:46.969282 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:56:46.969702 systemd-tmpfiles[1678]: Skipping /boot
Jul 15 23:56:46.986721 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:56:46.986808 systemd-tmpfiles[1678]: Skipping /boot
Jul 15 23:56:47.058258 zram_generator::config[1703]: No configuration found.
Jul 15 23:56:47.200594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:56:47.301301 systemd[1]: Reloading finished in 361 ms.
Jul 15 23:56:47.316907 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 23:56:47.330081 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:56:47.336481 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:56:47.341442 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 23:56:47.344585 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 23:56:47.350424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:56:47.356503 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:56:47.362580 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 23:56:47.373880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:56:47.374806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:56:47.377788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:56:47.384715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:56:47.397002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:56:47.397790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:56:47.397977 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:56:47.398149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:56:47.406579 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:56:47.407107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:56:47.408913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:56:47.409099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:56:47.409268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:56:47.430664 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:56:47.431097 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:56:47.436945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:56:47.438260 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:56:47.439992 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 23:56:47.449650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:56:47.449901 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:56:47.454362 systemd[1]: Finished ensure-sysext.service.
Jul 15 23:56:47.458589 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:56:47.459034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:56:47.461518 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:56:47.462333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:56:47.462383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:56:47.462458 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:56:47.462529 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:56:47.462584 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:56:47.470498 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 23:56:47.472465 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:56:47.501687 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 23:56:47.505740 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:56:47.509493 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:56:47.531101 systemd-udevd[1764]: Using default interface naming scheme 'v255'.
Jul 15 23:56:47.563039 augenrules[1797]: No rules
Jul 15 23:56:47.564418 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:56:47.564743 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:56:47.578993 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 23:56:47.581639 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 23:56:47.582830 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 23:56:47.597860 ldconfig[1551]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:56:47.606780 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:56:47.611662 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 23:56:47.617025 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:56:47.625146 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:56:47.649305 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 23:56:47.787086 systemd-resolved[1763]: Positive Trust Anchors:
Jul 15 23:56:47.787487 systemd-resolved[1763]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:56:47.787612 systemd-resolved[1763]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:56:47.804028 systemd-resolved[1763]: Defaulting to hostname 'linux'.
Jul 15 23:56:47.814444 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:56:47.815181 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:56:47.816349 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:56:47.817483 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 23:56:47.819362 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 23:56:47.819919 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 15 23:56:47.821687 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 23:56:47.823455 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 23:56:47.824000 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 23:56:47.824737 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 23:56:47.824783 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:56:47.826034 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:56:47.828935 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 23:56:47.832673 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 23:56:47.841078 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 23:56:47.846318 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 23:56:47.846944 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 23:56:47.857204 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 23:56:47.858638 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 23:56:47.860474 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 23:56:47.862481 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:56:47.862642 systemd-networkd[1813]: lo: Link UP
Jul 15 23:56:47.862648 systemd-networkd[1813]: lo: Gained carrier
Jul 15 23:56:47.863428 systemd-networkd[1813]: Enumeration completed
Jul 15 23:56:47.863468 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:56:47.864267 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:56:47.864388 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:56:47.868393 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 15 23:56:47.871321 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 23:56:47.880963 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 23:56:47.885435 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 23:56:47.889469 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 23:56:47.890110 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 23:56:47.895511 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 15 23:56:47.899486 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 23:56:47.904460 systemd[1]: Started ntpd.service - Network Time Service.
Jul 15 23:56:47.911045 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 23:56:47.925095 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 15 23:56:47.933654 (udev-worker)[1820]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 23:56:47.934918 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 23:56:47.954531 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 23:56:47.968197 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 23:56:47.970006 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 23:56:47.979790 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 23:56:47.981503 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 23:56:47.993039 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 23:56:48.012502 google_oslogin_nss_cache[1846]: oslogin_cache_refresh[1846]: Refreshing passwd entry cache
Jul 15 23:56:48.012502 google_oslogin_nss_cache[1846]: oslogin_cache_refresh[1846]: Failure getting users, quitting
Jul 15 23:56:48.012502 google_oslogin_nss_cache[1846]: oslogin_cache_refresh[1846]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 23:56:48.012502 google_oslogin_nss_cache[1846]: oslogin_cache_refresh[1846]: Refreshing group entry cache
Jul 15 23:56:48.012502 google_oslogin_nss_cache[1846]: oslogin_cache_refresh[1846]: Failure getting groups, quitting
Jul 15 23:56:48.012502 google_oslogin_nss_cache[1846]: oslogin_cache_refresh[1846]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 23:56:48.001089 oslogin_cache_refresh[1846]: Refreshing passwd entry cache
Jul 15 23:56:47.996006 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:56:48.004258 oslogin_cache_refresh[1846]: Failure getting users, quitting
Jul 15 23:56:47.998581 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 23:56:48.004278 oslogin_cache_refresh[1846]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 23:56:48.000692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 23:56:48.004330 oslogin_cache_refresh[1846]: Refreshing group entry cache
Jul 15 23:56:48.001304 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 23:56:48.004998 oslogin_cache_refresh[1846]: Failure getting groups, quitting
Jul 15 23:56:48.005011 oslogin_cache_refresh[1846]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 23:56:48.016035 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 23:56:48.017156 systemd[1]: Reached target network.target - Network.
Jul 15 23:56:48.021788 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 23:56:48.037838 extend-filesystems[1845]: Found /dev/nvme0n1p6
Jul 15 23:56:48.043068 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 23:56:48.047051 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 23:56:48.049122 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 15 23:56:48.051104 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 15 23:56:48.072023 jq[1844]: false
Jul 15 23:56:48.084111 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 23:56:48.084432 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 23:56:48.099790 extend-filesystems[1845]: Found /dev/nvme0n1p9
Jul 15 23:56:48.126675 jq[1856]: true
Jul 15 23:56:48.129497 extend-filesystems[1845]: Checking size of /dev/nvme0n1p9
Jul 15 23:56:48.125163 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 23:56:48.128422 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 23:56:48.151124 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:56:48.151137 systemd-networkd[1813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:56:48.162457 update_engine[1855]: I20250715 23:56:48.156796  1855 main.cc:92] Flatcar Update Engine starting
Jul 15 23:56:48.168491 ntpd[1848]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting
Jul 15 23:56:48.169574 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting
Jul 15 23:56:48.169574 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 15 23:56:48.169574 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: ----------------------------------------------------
Jul 15 23:56:48.169574 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: ntp-4 is maintained by Network Time Foundation,
Jul 15 23:56:48.169574 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 15 23:56:48.169574 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: corporation. Support and training for ntp-4 are
Jul 15 23:56:48.169574 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: available at https://www.nwtime.org/support
Jul 15 23:56:48.169574 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: ----------------------------------------------------
Jul 15 23:56:48.168521 ntpd[1848]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 15 23:56:48.168532 ntpd[1848]: ----------------------------------------------------
Jul 15 23:56:48.168540 ntpd[1848]: ntp-4 is maintained by Network Time Foundation,
Jul 15 23:56:48.168549 ntpd[1848]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 15 23:56:48.170549 systemd-networkd[1813]: eth0: Link UP
Jul 15 23:56:48.168559 ntpd[1848]: corporation. Support and training for ntp-4 are
Jul 15 23:56:48.170755 systemd-networkd[1813]: eth0: Gained carrier
Jul 15 23:56:48.168568 ntpd[1848]: available at https://www.nwtime.org/support
Jul 15 23:56:48.170786 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:56:48.168577 ntpd[1848]: ----------------------------------------------------
Jul 15 23:56:48.177285 ntpd[1848]: proto: precision = 0.066 usec (-24)
Jul 15 23:56:48.186102 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: proto: precision = 0.066 usec (-24)
Jul 15 23:56:48.186102 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: basedate set to 2025-07-03
Jul 15 23:56:48.186102 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: gps base set to 2025-07-06 (week 2374)
Jul 15 23:56:48.186102 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: Listen and drop on 0 v6wildcard [::]:123
Jul 15 23:56:48.186102 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 15 23:56:48.178159 ntpd[1848]: basedate set to 2025-07-03
Jul 15 23:56:48.186379 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: Listen normally on 2 lo 127.0.0.1:123
Jul 15 23:56:48.186379 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: Listen normally on 3 lo [::1]:123
Jul 15 23:56:48.178180 ntpd[1848]: gps base set to 2025-07-06 (week 2374)
Jul 15 23:56:48.186557 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: bind(20) AF_INET6 fe80::408:9cff:fef7:528d%2#123 flags 0x11 failed: Cannot assign requested address
Jul 15 23:56:48.186557 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: unable to create socket on eth0 (4) for fe80::408:9cff:fef7:528d%2#123
Jul 15 23:56:48.186557 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: failed to init interface for address fe80::408:9cff:fef7:528d%2
Jul 15 23:56:48.186557 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: Listening on routing socket on fd #20 for interface updates
Jul 15 23:56:48.185994 ntpd[1848]: Listen and drop on 0 v6wildcard [::]:123
Jul 15 23:56:48.186051 ntpd[1848]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 15 23:56:48.186288 ntpd[1848]: Listen normally on 2 lo 127.0.0.1:123
Jul 15 23:56:48.186334 ntpd[1848]: Listen normally on 3 lo [::1]:123
Jul 15 23:56:48.186384 ntpd[1848]: bind(20) AF_INET6 fe80::408:9cff:fef7:528d%2#123 flags 0x11 failed: Cannot assign requested address
Jul 15 23:56:48.186405 ntpd[1848]: unable to create socket on eth0 (4) for fe80::408:9cff:fef7:528d%2#123
Jul 15 23:56:48.186422 ntpd[1848]: failed to init interface for address fe80::408:9cff:fef7:528d%2
Jul 15 23:56:48.186453 ntpd[1848]: Listening on routing socket on fd #20 for interface updates
Jul 15 23:56:48.193517 ntpd[1848]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:56:48.195374 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:56:48.195374 ntpd[1848]: 15 Jul 23:56:48 ntpd[1848]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:56:48.193555 ntpd[1848]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:56:48.197326 systemd-networkd[1813]: eth0: DHCPv4 address 172.31.28.113/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 15 23:56:48.213536 (ntainerd)[1896]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 23:56:48.224020 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 23:56:48.238925 dbus-daemon[1842]: [system] SELinux support is enabled
Jul 15 23:56:48.239138 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 23:56:48.246994 tar[1879]: linux-amd64/LICENSE
Jul 15 23:56:48.246994 tar[1879]: linux-amd64/helm
Jul 15 23:56:48.250022 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 23:56:48.250304 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 23:56:48.252372 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 23:56:48.252400 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 23:56:48.254266 dbus-daemon[1842]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1813 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 15 23:56:48.255077 extend-filesystems[1845]: Resized partition /dev/nvme0n1p9
Jul 15 23:56:48.260574 dbus-daemon[1842]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 15 23:56:48.268707 jq[1893]: true
Jul 15 23:56:48.280483 update_engine[1855]: I20250715 23:56:48.278662  1855 update_check_scheduler.cc:74] Next update check in 11m36s
Jul 15 23:56:48.280572 extend-filesystems[1910]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 23:56:48.306383 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 15 23:56:48.282539 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 15 23:56:48.305937 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 23:56:48.373295 coreos-metadata[1841]: Jul 15 23:56:48.373 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 15 23:56:48.384267 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 15 23:56:48.387703 coreos-metadata[1841]: Jul 15 23:56:48.387 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 15 23:56:48.389559 coreos-metadata[1841]: Jul 15 23:56:48.389 INFO Fetch successful
Jul 15 23:56:48.389559 coreos-metadata[1841]: Jul 15 23:56:48.389 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 15 23:56:48.390101 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 23:56:48.392274 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 15 23:56:48.400851 coreos-metadata[1841]: Jul 15 23:56:48.392 INFO Fetch successful
Jul 15 23:56:48.400851 coreos-metadata[1841]: Jul 15 23:56:48.393 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 15 23:56:48.400851 coreos-metadata[1841]: Jul 15 23:56:48.400 INFO Fetch successful
Jul 15 23:56:48.400851 coreos-metadata[1841]: Jul 15 23:56:48.400 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 15 23:56:48.403871 extend-filesystems[1910]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 15 23:56:48.403871 extend-filesystems[1910]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 23:56:48.403871 extend-filesystems[1910]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 15 23:56:48.407433 kernel: mousedev: PS/2 mouse device common for all mice
Jul 15 23:56:48.407474 coreos-metadata[1841]: Jul 15 23:56:48.406 INFO Fetch successful
Jul 15 23:56:48.407474 coreos-metadata[1841]: Jul 15 23:56:48.406 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 15 23:56:48.405884 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 23:56:48.407649 extend-filesystems[1845]: Resized filesystem in /dev/nvme0n1p9
Jul 15 23:56:48.407414 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 23:56:48.413567 coreos-metadata[1841]: Jul 15 23:56:48.413 INFO Fetch failed with 404: resource not found
Jul 15 23:56:48.413567 coreos-metadata[1841]: Jul 15 23:56:48.413 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 15 23:56:48.415473 coreos-metadata[1841]: Jul 15 23:56:48.415 INFO Fetch successful
Jul 15 23:56:48.415473 coreos-metadata[1841]: Jul 15 23:56:48.415 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 15 23:56:48.417151 coreos-metadata[1841]: Jul 15 23:56:48.416 INFO Fetch successful
Jul 15 23:56:48.417151 coreos-metadata[1841]: Jul 15 23:56:48.417 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 15 23:56:48.418884 coreos-metadata[1841]: Jul 15 23:56:48.418 INFO Fetch successful
Jul 15 23:56:48.418884 coreos-metadata[1841]: Jul 15 23:56:48.418 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 15 23:56:48.420920 coreos-metadata[1841]: Jul 15 23:56:48.420 INFO Fetch successful
Jul 15 23:56:48.420920 coreos-metadata[1841]: Jul 15 23:56:48.420 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 15 23:56:48.422985 coreos-metadata[1841]: Jul 15 23:56:48.422 INFO Fetch successful
Jul 15 23:56:48.483922 bash[1936]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:56:48.486211 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 23:56:48.508514 systemd[1]: Starting sshkeys.service...
Jul 15 23:56:48.510832 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 15 23:56:48.512952 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 23:56:48.539381 systemd-logind[1854]: New seat seat0.
Jul 15 23:56:48.551863 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 23:56:48.596611 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 15 23:56:48.602165 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 15 23:56:48.621304 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jul 15 23:56:48.670883 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 15 23:56:48.676173 kernel: ACPI: button: Power Button [PWRF]
Jul 15 23:56:48.676280 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jul 15 23:56:48.688006 kernel: ACPI: button: Sleep Button [SLPF]
Jul 15 23:56:48.801150 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 15 23:56:48.823839 dbus-daemon[1842]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 15 23:56:48.825032 dbus-daemon[1842]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1911 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 15 23:56:48.839358 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 15 23:56:48.963555 locksmithd[1913]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 23:56:48.974984 coreos-metadata[1952]: Jul 15 23:56:48.974 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 15 23:56:48.978783 coreos-metadata[1952]: Jul 15 23:56:48.978 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jul 15 23:56:48.980963 coreos-metadata[1952]: Jul 15 23:56:48.980 INFO Fetch successful
Jul 15 23:56:48.980963 coreos-metadata[1952]: Jul 15 23:56:48.980 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 15 23:56:48.984078 coreos-metadata[1952]: Jul 15 23:56:48.983 INFO Fetch successful
Jul 15 23:56:48.990851 unknown[1952]: wrote ssh authorized keys file for user: core
Jul 15 23:56:49.040318 update-ssh-keys[1991]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:56:49.043977 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 15 23:56:49.053310 systemd[1]: Finished sshkeys.service.
Jul 15 23:56:49.111407 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:56:49.148108 systemd-logind[1854]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 23:56:49.174276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:56:49.175056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:56:49.179469 systemd-logind[1854]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jul 15 23:56:49.179598 systemd-logind[1854]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 15 23:56:49.184373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:56:49.276533 polkitd[1966]: Started polkitd version 126
Jul 15 23:56:49.302682 polkitd[1966]: Loading rules from directory /etc/polkit-1/rules.d
Jul 15 23:56:49.308095 containerd[1896]: time="2025-07-15T23:56:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 23:56:49.307401 polkitd[1966]: Loading rules from directory /run/polkit-1/rules.d
Jul 15 23:56:49.307462 polkitd[1966]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 23:56:49.307906 polkitd[1966]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jul 15 23:56:49.307932 polkitd[1966]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 23:56:49.307981 polkitd[1966]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 15 23:56:49.311381 containerd[1896]: time="2025-07-15T23:56:49.311337177Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 15 23:56:49.312629 polkitd[1966]: Finished loading, compiling and executing 2 rules
Jul 15 23:56:49.313569 systemd[1]: Started polkit.service - Authorization Manager.
Jul 15 23:56:49.318881 dbus-daemon[1842]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 15 23:56:49.320625 polkitd[1966]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 15 23:56:49.359423 systemd-hostnamed[1911]: Hostname set to (transient)
Jul 15 23:56:49.359593 systemd-resolved[1763]: System hostname changed to 'ip-172-31-28-113'.
Jul 15 23:56:49.383912 containerd[1896]: time="2025-07-15T23:56:49.383859603Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.093µs"
Jul 15 23:56:49.383912 containerd[1896]: time="2025-07-15T23:56:49.383909510Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 23:56:49.384040 containerd[1896]: time="2025-07-15T23:56:49.383933657Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 23:56:49.384173 containerd[1896]: time="2025-07-15T23:56:49.384124224Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 23:56:49.384173 containerd[1896]: time="2025-07-15T23:56:49.384152064Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 23:56:49.384295 containerd[1896]: time="2025-07-15T23:56:49.384200814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:56:49.385919 containerd[1896]: time="2025-07-15T23:56:49.385709735Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:56:49.385919 containerd[1896]: time="2025-07-15T23:56:49.385743552Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:56:49.386108 containerd[1896]: time="2025-07-15T23:56:49.386061316Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:56:49.386108 containerd[1896]: time="2025-07-15T23:56:49.386085950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:56:49.386202 containerd[1896]: time="2025-07-15T23:56:49.386106368Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:56:49.386202 containerd[1896]: time="2025-07-15T23:56:49.386118796Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 23:56:49.396053 containerd[1896]: time="2025-07-15T23:56:49.386222065Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 23:56:49.398308 containerd[1896]: time="2025-07-15T23:56:49.397699407Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:56:49.398308 containerd[1896]: time="2025-07-15T23:56:49.397778769Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:56:49.398308 containerd[1896]: time="2025-07-15T23:56:49.397796688Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 23:56:49.398308 containerd[1896]: time="2025-07-15T23:56:49.397829424Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 23:56:49.398308 containerd[1896]: time="2025-07-15T23:56:49.398156970Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 23:56:49.403584 containerd[1896]: time="2025-07-15T23:56:49.403533662Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 23:56:49.416789 containerd[1896]: time="2025-07-15T23:56:49.416720012Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 23:56:49.416909 containerd[1896]: time="2025-07-15T23:56:49.416874665Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 23:56:49.416909 containerd[1896]: time="2025-07-15T23:56:49.416903265Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 23:56:49.416990 containerd[1896]: time="2025-07-15T23:56:49.416920268Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 23:56:49.418331 containerd[1896]: time="2025-07-15T23:56:49.418290887Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 23:56:49.418419 containerd[1896]: time="2025-07-15T23:56:49.418361778Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 23:56:49.418419 containerd[1896]: time="2025-07-15T23:56:49.418383942Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 23:56:49.418489 containerd[1896]: time="2025-07-15T23:56:49.418401342Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 23:56:49.418489 containerd[1896]: time="2025-07-15T23:56:49.418432562Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 23:56:49.418489 containerd[1896]: time="2025-07-15T23:56:49.418448592Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 23:56:49.418489 containerd[1896]: time="2025-07-15T23:56:49.418473335Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 23:56:49.418624 containerd[1896]: time="2025-07-15T23:56:49.418522452Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 23:56:49.419397 containerd[1896]: time="2025-07-15T23:56:49.419364164Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 23:56:49.419470 containerd[1896]: time="2025-07-15T23:56:49.419408345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.422318247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424321656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424363427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424383651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424406947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424426806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424449192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424469731Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424495796Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424589534Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424612485Z" level=info msg="Start snapshots syncer"
Jul 15 23:56:49.425309 containerd[1896]: time="2025-07-15T23:56:49.424648103Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 23:56:49.425872 containerd[1896]: time="2025-07-15T23:56:49.425014987Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 23:56:49.425872 containerd[1896]: time="2025-07-15T23:56:49.425097495Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 23:56:49.426052 containerd[1896]: time="2025-07-15T23:56:49.425208213Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 23:56:49.444047 containerd[1896]: time="2025-07-15T23:56:49.444003052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 23:56:49.444266 containerd[1896]: time="2025-07-15T23:56:49.444218874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 23:56:49.444403 containerd[1896]: time="2025-07-15T23:56:49.444382896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 23:56:49.444493 containerd[1896]: time="2025-07-15T23:56:49.444478563Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 23:56:49.444586 containerd[1896]: time="2025-07-15T23:56:49.444571701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 23:56:49.444665 containerd[1896]: time="2025-07-15T23:56:49.444653868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 23:56:49.444741 containerd[1896]: time="2025-07-15T23:56:49.444729147Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 23:56:49.444846 containerd[1896]: time="2025-07-15T23:56:49.444833749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 23:56:49.444928 containerd[1896]: time="2025-07-15T23:56:49.444915663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 23:56:49.445015 containerd[1896]: time="2025-07-15T23:56:49.445002090Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 23:56:49.445132 containerd[1896]: time="2025-07-15T23:56:49.445119686Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:56:49.445418 containerd[1896]: time="2025-07-15T23:56:49.445394775Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:56:49.445514 containerd[1896]: time="2025-07-15T23:56:49.445499109Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:56:49.445614 containerd[1896]: time="2025-07-15T23:56:49.445595744Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:56:49.445704 containerd[1896]: time="2025-07-15T23:56:49.445689086Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 23:56:49.445806 containerd[1896]: time="2025-07-15T23:56:49.445791036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 23:56:49.445896 containerd[1896]: time="2025-07-15T23:56:49.445880745Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 23:56:49.446284 containerd[1896]: time="2025-07-15T23:56:49.445963521Z" level=info msg="runtime interface created"
Jul 15 23:56:49.446284 containerd[1896]: time="2025-07-15T23:56:49.445974698Z" level=info msg="created NRI interface"
Jul 15 23:56:49.446284 containerd[1896]: time="2025-07-15T23:56:49.445988303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 23:56:49.446284 containerd[1896]: time="2025-07-15T23:56:49.446009937Z" level=info msg="Connect containerd service"
Jul 15 23:56:49.446284 containerd[1896]: time="2025-07-15T23:56:49.446056794Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 23:56:49.449850 containerd[1896]: time="2025-07-15T23:56:49.449813111Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:56:49.455684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 15 23:56:49.462684 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 23:56:49.509226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:56:49.548667 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 23:56:49.608496 systemd-networkd[1813]: eth0: Gained IPv6LL
Jul 15 23:56:49.617396 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 23:56:49.618795 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 23:56:49.626607 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jul 15 23:56:49.633314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:56:49.659566 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 23:56:49.794835 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 23:56:49.821356 amazon-ssm-agent[2093]: Initializing new seelog logger
Jul 15 23:56:49.821356 amazon-ssm-agent[2093]: New Seelog Logger Creation Complete
Jul 15 23:56:49.821356 amazon-ssm-agent[2093]: 2025/07/15 23:56:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:49.821356 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:49.821356 amazon-ssm-agent[2093]: 2025/07/15 23:56:49 processing appconfig overrides
Jul 15 23:56:49.821858 amazon-ssm-agent[2093]: 2025/07/15 23:56:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:49.821858 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:49.823479 amazon-ssm-agent[2093]: 2025/07/15 23:56:49 processing appconfig overrides
Jul 15 23:56:49.823479 amazon-ssm-agent[2093]: 2025/07/15 23:56:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:49.823479 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:49.823479 amazon-ssm-agent[2093]: 2025/07/15 23:56:49 processing appconfig overrides
Jul 15 23:56:49.823479 amazon-ssm-agent[2093]: 2025-07-15 23:56:49.8216 INFO Proxy environment variables:
Jul 15 23:56:49.829132 amazon-ssm-agent[2093]: 2025/07/15 23:56:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:49.829132 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:49.829132 amazon-ssm-agent[2093]: 2025/07/15 23:56:49 processing appconfig overrides
Jul 15 23:56:49.929119 amazon-ssm-agent[2093]: 2025-07-15 23:56:49.8218 INFO https_proxy:
Jul 15 23:56:49.967708 containerd[1896]: time="2025-07-15T23:56:49.967652198Z" level=info msg="Start subscribing containerd event"
Jul 15 23:56:49.973605 containerd[1896]: time="2025-07-15T23:56:49.973145130Z" level=info msg="Start recovering state"
Jul 15 23:56:49.973605 containerd[1896]: time="2025-07-15T23:56:49.973330885Z" level=info msg="Start event monitor"
Jul 15 23:56:49.973605 containerd[1896]: time="2025-07-15T23:56:49.973360817Z" level=info msg="Start cni network conf syncer for default"
Jul 15 23:56:49.973605 containerd[1896]: time="2025-07-15T23:56:49.973377261Z" level=info msg="Start streaming server"
Jul 15 23:56:49.973605 containerd[1896]: time="2025-07-15T23:56:49.973391128Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 23:56:49.973605 containerd[1896]: time="2025-07-15T23:56:49.973408597Z" level=info msg="runtime interface starting up..."
Jul 15 23:56:49.973605 containerd[1896]: time="2025-07-15T23:56:49.973419994Z" level=info msg="starting plugins..."
Jul 15 23:56:49.973605 containerd[1896]: time="2025-07-15T23:56:49.973436901Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 23:56:49.976901 containerd[1896]: time="2025-07-15T23:56:49.975563794Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 23:56:49.976901 containerd[1896]: time="2025-07-15T23:56:49.975636027Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 23:56:49.976901 containerd[1896]: time="2025-07-15T23:56:49.975715658Z" level=info msg="containerd successfully booted in 0.669295s"
Jul 15 23:56:49.976161 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 23:56:50.030255 amazon-ssm-agent[2093]: 2025-07-15 23:56:49.8218 INFO http_proxy:
Jul 15 23:56:50.130386 amazon-ssm-agent[2093]: 2025-07-15 23:56:49.8218 INFO no_proxy:
Jul 15 23:56:50.229163 amazon-ssm-agent[2093]: 2025-07-15 23:56:49.8219 INFO Checking if agent identity type OnPrem can be assumed
Jul 15 23:56:50.292503 sshd_keygen[1899]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 23:56:50.328324 amazon-ssm-agent[2093]: 2025-07-15 23:56:49.8221 INFO Checking if agent identity type EC2 can be assumed
Jul 15 23:56:50.329489 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 23:56:50.333914 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 23:56:50.347742 tar[1879]: linux-amd64/README.md
Jul 15 23:56:50.368997 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 23:56:50.370047 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 23:56:50.382348 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 23:56:50.395808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 23:56:50.424306 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 23:56:50.427971 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0658 INFO Agent will take identity from EC2
Jul 15 23:56:50.428415 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 23:56:50.433735 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 15 23:56:50.435668 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 23:56:50.507684 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 23:56:50.511556 systemd[1]: Started sshd@0-172.31.28.113:22-139.178.89.65:54024.service - OpenSSH per-connection server daemon (139.178.89.65:54024).
Jul 15 23:56:50.526819 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0695 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Jul 15 23:56:50.540494 ntpd[1848]: giving up resolving host metadata.google.internal: Name or service not known (-2)
Jul 15 23:56:50.540841 ntpd[1848]: 15 Jul 23:56:50 ntpd[1848]: giving up resolving host metadata.google.internal: Name or service not known (-2)
Jul 15 23:56:50.626222 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0695 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jul 15 23:56:50.725725 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0695 INFO [amazon-ssm-agent] Starting Core Agent
Jul 15 23:56:50.815894 sshd[2138]: Accepted publickey for core from 139.178.89.65 port 54024 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:56:50.818614 sshd-session[2138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:50.825260 amazon-ssm-agent[2093]: 2025/07/15 23:56:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:50.825782 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:56:50.825782 amazon-ssm-agent[2093]: 2025/07/15 23:56:50 processing appconfig overrides
Jul 15 23:56:50.826258 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0695 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Jul 15 23:56:50.829779 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 23:56:50.833841 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 23:56:50.850364 systemd-logind[1854]: New session 1 of user core.
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0695 INFO [Registrar] Starting registrar module
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0733 INFO [EC2Identity] Checking disk for registration info
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0733 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.0733 INFO [EC2Identity] Generating registration keypair
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.7797 INFO [EC2Identity] Checking write access before registering
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.7801 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.8246 INFO [EC2Identity] EC2 registration was successful.
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.8247 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.8247 INFO [CredentialRefresher] credentialRefresher has started
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.8248 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.8647 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 15 23:56:50.865445 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.8649 INFO [CredentialRefresher] Credentials ready
Jul 15 23:56:50.867160 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 23:56:50.871767 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 23:56:50.885155 (systemd)[2142]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 23:56:50.888568 systemd-logind[1854]: New session c1 of user core.
Jul 15 23:56:50.925451 amazon-ssm-agent[2093]: 2025-07-15 23:56:50.8651 INFO [CredentialRefresher] Next credential rotation will be in 29.999993950183335 minutes
Jul 15 23:56:51.083785 systemd[2142]: Queued start job for default target default.target.
Jul 15 23:56:51.094905 systemd[2142]: Created slice app.slice - User Application Slice.
Jul 15 23:56:51.095030 systemd[2142]: Reached target paths.target - Paths.
Jul 15 23:56:51.095377 systemd[2142]: Reached target timers.target - Timers.
Jul 15 23:56:51.097039 systemd[2142]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 23:56:51.121915 systemd[2142]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 23:56:51.122964 systemd[2142]: Reached target sockets.target - Sockets.
Jul 15 23:56:51.123032 systemd[2142]: Reached target basic.target - Basic System.
Jul 15 23:56:51.123083 systemd[2142]: Reached target default.target - Main User Target.
Jul 15 23:56:51.123124 systemd[2142]: Startup finished in 225ms.
Jul 15 23:56:51.123294 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 23:56:51.128461 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 23:56:51.275506 systemd[1]: Started sshd@1-172.31.28.113:22-139.178.89.65:54040.service - OpenSSH per-connection server daemon (139.178.89.65:54040).
Jul 15 23:56:51.443904 sshd[2153]: Accepted publickey for core from 139.178.89.65 port 54040 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:56:51.445696 sshd-session[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:51.451706 systemd-logind[1854]: New session 2 of user core.
Jul 15 23:56:51.455435 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 23:56:51.533967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:56:51.536197 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 23:56:51.538409 systemd[1]: Startup finished in 2.821s (kernel) + 7.641s (initrd) + 7.273s (userspace) = 17.736s.
Jul 15 23:56:51.547036 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:56:51.586446 sshd[2155]: Connection closed by 139.178.89.65 port 54040
Jul 15 23:56:51.586983 sshd-session[2153]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:51.591127 systemd[1]: sshd@1-172.31.28.113:22-139.178.89.65:54040.service: Deactivated successfully.
Jul 15 23:56:51.593746 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 23:56:51.595267 systemd-logind[1854]: Session 2 logged out. Waiting for processes to exit.
Jul 15 23:56:51.596546 systemd-logind[1854]: Removed session 2.
Jul 15 23:56:51.619399 systemd[1]: Started sshd@2-172.31.28.113:22-139.178.89.65:54052.service - OpenSSH per-connection server daemon (139.178.89.65:54052).
Jul 15 23:56:51.799602 sshd[2171]: Accepted publickey for core from 139.178.89.65 port 54052 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:56:51.800978 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:51.807817 systemd-logind[1854]: New session 3 of user core.
Jul 15 23:56:51.815454 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 23:56:51.878555 amazon-ssm-agent[2093]: 2025-07-15 23:56:51.8777 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 15 23:56:51.931039 sshd[2178]: Connection closed by 139.178.89.65 port 54052
Jul 15 23:56:51.932205 sshd-session[2171]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:51.935996 systemd-logind[1854]: Session 3 logged out. Waiting for processes to exit.
Jul 15 23:56:51.937999 systemd[1]: sshd@2-172.31.28.113:22-139.178.89.65:54052.service: Deactivated successfully.
Jul 15 23:56:51.940216 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 23:56:51.942391 systemd-logind[1854]: Removed session 3.
Jul 15 23:56:51.968851 systemd[1]: Started sshd@3-172.31.28.113:22-139.178.89.65:54054.service - OpenSSH per-connection server daemon (139.178.89.65:54054).
Jul 15 23:56:51.979795 amazon-ssm-agent[2093]: 2025-07-15 23:56:51.8795 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2181) started
Jul 15 23:56:52.080798 amazon-ssm-agent[2093]: 2025-07-15 23:56:51.8795 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 15 23:56:52.164538 sshd[2192]: Accepted publickey for core from 139.178.89.65 port 54054 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:56:52.166094 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:52.168948 ntpd[1848]: Listen normally on 5 eth0 172.31.28.113:123
Jul 15 23:56:52.169553 ntpd[1848]: 15 Jul 23:56:52 ntpd[1848]: Listen normally on 5 eth0 172.31.28.113:123
Jul 15 23:56:52.169553 ntpd[1848]: 15 Jul 23:56:52 ntpd[1848]: Listen normally on 6 eth0 [fe80::408:9cff:fef7:528d%2]:123
Jul 15 23:56:52.169000 ntpd[1848]: Listen normally on 6 eth0 [fe80::408:9cff:fef7:528d%2]:123
Jul 15 23:56:52.171471 systemd-logind[1854]: New session 4 of user core.
Jul 15 23:56:52.182626 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 23:56:52.303834 sshd[2199]: Connection closed by 139.178.89.65 port 54054
Jul 15 23:56:52.304360 sshd-session[2192]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:52.307619 systemd[1]: sshd@3-172.31.28.113:22-139.178.89.65:54054.service: Deactivated successfully.
Jul 15 23:56:52.309426 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 23:56:52.311217 systemd-logind[1854]: Session 4 logged out. Waiting for processes to exit.
Jul 15 23:56:52.312801 systemd-logind[1854]: Removed session 4.
Jul 15 23:56:52.338209 systemd[1]: Started sshd@4-172.31.28.113:22-139.178.89.65:54058.service - OpenSSH per-connection server daemon (139.178.89.65:54058).
Jul 15 23:56:52.421933 kubelet[2161]: E0715 23:56:52.421852 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:56:52.424769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:56:52.424967 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:56:52.425546 systemd[1]: kubelet.service: Consumed 1.100s CPU time, 269.4M memory peak.
Jul 15 23:56:52.510847 sshd[2205]: Accepted publickey for core from 139.178.89.65 port 54058 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:56:52.512096 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:52.518646 systemd-logind[1854]: New session 5 of user core.
Jul 15 23:56:52.523502 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 23:56:52.652903 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:56:52.653182 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:56:52.668766 sudo[2209]: pam_unix(sudo:session): session closed for user root Jul 15 23:56:52.691419 sshd[2208]: Connection closed by 139.178.89.65 port 54058 Jul 15 23:56:52.692122 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Jul 15 23:56:52.696780 systemd[1]: sshd@4-172.31.28.113:22-139.178.89.65:54058.service: Deactivated successfully. Jul 15 23:56:52.698846 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:56:52.699855 systemd-logind[1854]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:56:52.701348 systemd-logind[1854]: Removed session 5. Jul 15 23:56:52.725779 systemd[1]: Started sshd@5-172.31.28.113:22-139.178.89.65:54064.service - OpenSSH per-connection server daemon (139.178.89.65:54064). Jul 15 23:56:52.909819 sshd[2215]: Accepted publickey for core from 139.178.89.65 port 54064 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:56:52.911155 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:56:52.916378 systemd-logind[1854]: New session 6 of user core. Jul 15 23:56:52.926462 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 15 23:56:53.023030 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:56:53.023324 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:56:53.029115 sudo[2219]: pam_unix(sudo:session): session closed for user root Jul 15 23:56:53.034655 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:56:53.034940 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:56:53.045096 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:56:53.091308 augenrules[2241]: No rules Jul 15 23:56:53.092526 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:56:53.092985 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:56:53.094431 sudo[2218]: pam_unix(sudo:session): session closed for user root Jul 15 23:56:53.118000 sshd[2217]: Connection closed by 139.178.89.65 port 54064 Jul 15 23:56:53.118522 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Jul 15 23:56:53.122000 systemd[1]: sshd@5-172.31.28.113:22-139.178.89.65:54064.service: Deactivated successfully. Jul 15 23:56:53.124011 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:56:53.126185 systemd-logind[1854]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:56:53.128042 systemd-logind[1854]: Removed session 6. Jul 15 23:56:53.154185 systemd[1]: Started sshd@6-172.31.28.113:22-139.178.89.65:54072.service - OpenSSH per-connection server daemon (139.178.89.65:54072). 
Jul 15 23:56:53.333993 sshd[2250]: Accepted publickey for core from 139.178.89.65 port 54072 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:56:53.335361 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:56:53.340790 systemd-logind[1854]: New session 7 of user core. Jul 15 23:56:53.351476 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:56:53.445377 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:56:53.445692 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:56:54.081695 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:56:54.091720 (dockerd)[2271]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:56:54.508319 dockerd[2271]: time="2025-07-15T23:56:54.508149616Z" level=info msg="Starting up" Jul 15 23:56:54.510495 dockerd[2271]: time="2025-07-15T23:56:54.510453437Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:56:54.786500 dockerd[2271]: time="2025-07-15T23:56:54.786385524Z" level=info msg="Loading containers: start." Jul 15 23:56:54.797269 kernel: Initializing XFRM netlink socket Jul 15 23:56:55.072210 (udev-worker)[2292]: Network interface NamePolicy= disabled on kernel command line. Jul 15 23:56:55.119825 systemd-networkd[1813]: docker0: Link UP Jul 15 23:56:55.124975 dockerd[2271]: time="2025-07-15T23:56:55.124919264Z" level=info msg="Loading containers: done." Jul 15 23:56:55.144015 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1132034243-merged.mount: Deactivated successfully. 
Jul 15 23:56:55.154259 dockerd[2271]: time="2025-07-15T23:56:55.154083203Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:56:55.154259 dockerd[2271]: time="2025-07-15T23:56:55.154168817Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:56:55.154473 dockerd[2271]: time="2025-07-15T23:56:55.154308247Z" level=info msg="Initializing buildkit" Jul 15 23:56:55.180672 dockerd[2271]: time="2025-07-15T23:56:55.180633358Z" level=info msg="Completed buildkit initialization" Jul 15 23:56:55.188430 dockerd[2271]: time="2025-07-15T23:56:55.188349306Z" level=info msg="Daemon has completed initialization" Jul 15 23:56:55.188430 dockerd[2271]: time="2025-07-15T23:56:55.188409584Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:56:55.188815 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:56:56.116727 containerd[1896]: time="2025-07-15T23:56:56.116681987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Jul 15 23:56:56.669312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180427878.mount: Deactivated successfully. 
Jul 15 23:56:57.974222 containerd[1896]: time="2025-07-15T23:56:57.974174868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:56:57.975042 containerd[1896]: time="2025-07-15T23:56:57.974990705Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237"
Jul 15 23:56:57.975982 containerd[1896]: time="2025-07-15T23:56:57.975931244Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:56:57.981943 containerd[1896]: time="2025-07-15T23:56:57.981252550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:56:57.982301 containerd[1896]: time="2025-07-15T23:56:57.982269356Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 1.865545761s"
Jul 15 23:56:57.982364 containerd[1896]: time="2025-07-15T23:56:57.982309003Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\""
Jul 15 23:56:57.983089 containerd[1896]: time="2025-07-15T23:56:57.983063883Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\""
Jul 15 23:56:59.512934 containerd[1896]: time="2025-07-15T23:56:59.512873973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:56:59.515049 containerd[1896]: time="2025-07-15T23:56:59.514994363Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361"
Jul 15 23:56:59.517612 containerd[1896]: time="2025-07-15T23:56:59.517544114Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:56:59.527087 containerd[1896]: time="2025-07-15T23:56:59.526995508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:56:59.528159 containerd[1896]: time="2025-07-15T23:56:59.527999077Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.544905846s"
Jul 15 23:56:59.528159 containerd[1896]: time="2025-07-15T23:56:59.528042603Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\""
Jul 15 23:56:59.529093 containerd[1896]: time="2025-07-15T23:56:59.528724601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Jul 15 23:57:01.007035 containerd[1896]: time="2025-07-15T23:57:01.006951839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:01.009397 containerd[1896]: time="2025-07-15T23:57:01.009120863Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013"
Jul 15 23:57:01.009397 containerd[1896]: time="2025-07-15T23:57:01.009204554Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:01.015443 containerd[1896]: time="2025-07-15T23:57:01.015363621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:01.019803 containerd[1896]: time="2025-07-15T23:57:01.019646165Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.490886568s"
Jul 15 23:57:01.019803 containerd[1896]: time="2025-07-15T23:57:01.019693857Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\""
Jul 15 23:57:01.020682 containerd[1896]: time="2025-07-15T23:57:01.020482200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Jul 15 23:57:02.177383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179452645.mount: Deactivated successfully.
Jul 15 23:57:02.676319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:57:02.680617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:02.823354 containerd[1896]: time="2025-07-15T23:57:02.823213556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:02.826939 containerd[1896]: time="2025-07-15T23:57:02.826902212Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666"
Jul 15 23:57:02.829758 containerd[1896]: time="2025-07-15T23:57:02.829712719Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:02.834819 containerd[1896]: time="2025-07-15T23:57:02.834765591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:02.835539 containerd[1896]: time="2025-07-15T23:57:02.835112629Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 1.814593437s"
Jul 15 23:57:02.835539 containerd[1896]: time="2025-07-15T23:57:02.835138650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\""
Jul 15 23:57:02.835954 containerd[1896]: time="2025-07-15T23:57:02.835794198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 15 23:57:02.921636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:02.929906 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:57:02.978077 kubelet[2551]: E0715 23:57:02.978002 2551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:57:02.982278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:57:02.982481 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:57:02.982995 systemd[1]: kubelet.service: Consumed 187ms CPU time, 108.4M memory peak.
Jul 15 23:57:04.066338 systemd-resolved[1763]: Clock change detected. Flushing caches.
Jul 15 23:57:04.214935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271793256.mount: Deactivated successfully.
Jul 15 23:57:05.277878 containerd[1896]: time="2025-07-15T23:57:05.277800519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:05.280290 containerd[1896]: time="2025-07-15T23:57:05.280085481Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jul 15 23:57:05.282920 containerd[1896]: time="2025-07-15T23:57:05.282881902Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:05.286664 containerd[1896]: time="2025-07-15T23:57:05.286604651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:05.287670 containerd[1896]: time="2025-07-15T23:57:05.287384861Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.555571015s"
Jul 15 23:57:05.287670 containerd[1896]: time="2025-07-15T23:57:05.287421899Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 15 23:57:05.288026 containerd[1896]: time="2025-07-15T23:57:05.288003354Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 23:57:05.812295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931318621.mount: Deactivated successfully.
Jul 15 23:57:05.824089 containerd[1896]: time="2025-07-15T23:57:05.824028469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:57:05.825984 containerd[1896]: time="2025-07-15T23:57:05.825922467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 15 23:57:05.828260 containerd[1896]: time="2025-07-15T23:57:05.828202472Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:57:05.831957 containerd[1896]: time="2025-07-15T23:57:05.831904039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:57:05.832816 containerd[1896]: time="2025-07-15T23:57:05.832413951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 544.382344ms"
Jul 15 23:57:05.832816 containerd[1896]: time="2025-07-15T23:57:05.832442849Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 15 23:57:05.833022 containerd[1896]: time="2025-07-15T23:57:05.833001127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 15 23:57:06.382000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585019110.mount: Deactivated successfully.
Jul 15 23:57:09.029030 containerd[1896]: time="2025-07-15T23:57:09.028970214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:09.030193 containerd[1896]: time="2025-07-15T23:57:09.030145993Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Jul 15 23:57:09.031532 containerd[1896]: time="2025-07-15T23:57:09.031476914Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:09.033890 containerd[1896]: time="2025-07-15T23:57:09.033839042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:09.035268 containerd[1896]: time="2025-07-15T23:57:09.034853643Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.201798351s"
Jul 15 23:57:09.035268 containerd[1896]: time="2025-07-15T23:57:09.034887943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 15 23:57:11.676920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:11.677266 systemd[1]: kubelet.service: Consumed 187ms CPU time, 108.4M memory peak.
Jul 15 23:57:11.680045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:11.715738 systemd[1]: Reload requested from client PID 2698 ('systemctl') (unit session-7.scope)...
Jul 15 23:57:11.715758 systemd[1]: Reloading...
Jul 15 23:57:11.826279 zram_generator::config[2743]: No configuration found.
Jul 15 23:57:11.957552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:57:12.104606 systemd[1]: Reloading finished in 388 ms.
Jul 15 23:57:12.173968 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 23:57:12.174058 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 23:57:12.174414 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:12.174474 systemd[1]: kubelet.service: Consumed 129ms CPU time, 98.2M memory peak.
Jul 15 23:57:12.176460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:12.405892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:12.413750 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:57:12.482303 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:57:12.482303 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:57:12.482303 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:57:12.489452 kubelet[2806]: I0715 23:57:12.484351 2806 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:57:12.720923 kubelet[2806]: I0715 23:57:12.720799 2806 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 15 23:57:12.720923 kubelet[2806]: I0715 23:57:12.720838 2806 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:57:12.721368 kubelet[2806]: I0715 23:57:12.721335 2806 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 15 23:57:12.760613 kubelet[2806]: I0715 23:57:12.760518 2806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:57:12.764838 kubelet[2806]: E0715 23:57:12.764795 2806 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 15 23:57:12.785050 kubelet[2806]: I0715 23:57:12.785004 2806 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:57:12.798854 kubelet[2806]: I0715 23:57:12.798746 2806 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:57:12.804844 kubelet[2806]: I0715 23:57:12.804768 2806 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:57:12.809700 kubelet[2806]: I0715 23:57:12.804834 2806 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:57:12.812682 kubelet[2806]: I0715 23:57:12.812640 2806 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:57:12.812682 kubelet[2806]: I0715 23:57:12.812683 2806 container_manager_linux.go:303] "Creating device plugin manager"
Jul 15 23:57:12.812860 kubelet[2806]: I0715 23:57:12.812849 2806 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:57:12.817826 kubelet[2806]: I0715 23:57:12.817648 2806 kubelet.go:480] "Attempting to sync node with API server"
Jul 15 23:57:12.817826 kubelet[2806]: I0715 23:57:12.817693 2806 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:57:12.817826 kubelet[2806]: I0715 23:57:12.817724 2806 kubelet.go:386] "Adding apiserver pod source"
Jul 15 23:57:12.820335 kubelet[2806]: I0715 23:57:12.820092 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:57:12.823065 kubelet[2806]: E0715 23:57:12.823026 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-113&limit=500&resourceVersion=0\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 15 23:57:12.836419 kubelet[2806]: E0715 23:57:12.836125 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 15 23:57:12.839867 kubelet[2806]: I0715 23:57:12.839840 2806 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:57:12.840360 kubelet[2806]: I0715 23:57:12.840341 2806 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 15 23:57:12.841290 kubelet[2806]: W0715 23:57:12.841259 2806 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 23:57:12.847036 kubelet[2806]: I0715 23:57:12.846494 2806 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 23:57:12.847036 kubelet[2806]: I0715 23:57:12.846557 2806 server.go:1289] "Started kubelet"
Jul 15 23:57:12.850609 kubelet[2806]: I0715 23:57:12.850544 2806 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:57:12.852513 kubelet[2806]: I0715 23:57:12.852469 2806 server.go:317] "Adding debug handlers to kubelet server"
Jul 15 23:57:12.854435 kubelet[2806]: I0715 23:57:12.853843 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:57:12.854435 kubelet[2806]: I0715 23:57:12.854197 2806 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:57:12.858461 kubelet[2806]: E0715 23:57:12.854332 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.113:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.113:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-113.1852921715b35ca5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-113,UID:ip-172-31-28-113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-113,},FirstTimestamp:2025-07-15 23:57:12.846523557 +0000 UTC m=+0.428176474,LastTimestamp:2025-07-15 23:57:12.846523557 +0000 UTC m=+0.428176474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-113,}"
Jul 15 23:57:12.860765 kubelet[2806]: I0715 23:57:12.860581 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:57:12.862290 kubelet[2806]: I0715 23:57:12.862266 2806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:57:12.868730 kubelet[2806]: E0715 23:57:12.868692 2806 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-113\" not found"
Jul 15 23:57:12.868861 kubelet[2806]: I0715 23:57:12.868734 2806 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 23:57:12.869717 kubelet[2806]: I0715 23:57:12.869118 2806 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 23:57:12.869717 kubelet[2806]: I0715 23:57:12.869177 2806 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:57:12.869717 kubelet[2806]: E0715 23:57:12.869576 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 15 23:57:12.875704 kubelet[2806]: E0715 23:57:12.875507 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-113?timeout=10s\": dial tcp 172.31.28.113:6443: connect: connection refused" interval="200ms"
Jul 15 23:57:12.878151 kubelet[2806]: I0715 23:57:12.878129 2806 factory.go:223] Registration of the containerd container factory successfully
Jul 15 23:57:12.878306 kubelet[2806]: I0715 23:57:12.878298 2806 factory.go:223] Registration of the systemd container factory successfully
Jul 15 23:57:12.878456 kubelet[2806]: I0715 23:57:12.878418 2806 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:57:12.885151 kubelet[2806]: I0715 23:57:12.885026 2806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:57:12.886848 kubelet[2806]: I0715 23:57:12.886818 2806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:57:12.886848 kubelet[2806]: I0715 23:57:12.886845 2806 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 15 23:57:12.886972 kubelet[2806]: I0715 23:57:12.886866 2806 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 23:57:12.886972 kubelet[2806]: I0715 23:57:12.886873 2806 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 15 23:57:12.886972 kubelet[2806]: E0715 23:57:12.886914 2806 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:57:12.898455 kubelet[2806]: E0715 23:57:12.898410 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 15 23:57:12.916513 kubelet[2806]: I0715 23:57:12.916300 2806 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 23:57:12.916513 kubelet[2806]: I0715 23:57:12.916318 2806 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 23:57:12.916513 kubelet[2806]: I0715 23:57:12.916336 2806 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:57:12.918779 kubelet[2806]: I0715 23:57:12.918734 2806 policy_none.go:49] "None policy: Start"
Jul 15 23:57:12.918779 kubelet[2806]: I0715 23:57:12.918766 2806
memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:57:12.918779 kubelet[2806]: I0715 23:57:12.918780 2806 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:57:12.927422 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:57:12.940287 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 23:57:12.945184 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 23:57:12.954966 kubelet[2806]: E0715 23:57:12.954936 2806 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:57:12.955202 kubelet[2806]: I0715 23:57:12.955169 2806 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:57:12.956370 kubelet[2806]: I0715 23:57:12.955189 2806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:57:12.956370 kubelet[2806]: I0715 23:57:12.955875 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:57:12.959610 kubelet[2806]: E0715 23:57:12.959580 2806 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:57:12.959737 kubelet[2806]: E0715 23:57:12.959631 2806 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-113\" not found" Jul 15 23:57:13.002898 systemd[1]: Created slice kubepods-burstable-poddf732de9af5be8c95c592085191d8f0f.slice - libcontainer container kubepods-burstable-poddf732de9af5be8c95c592085191d8f0f.slice. 
Jul 15 23:57:13.016904 kubelet[2806]: E0715 23:57:13.016863 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:13.020302 systemd[1]: Created slice kubepods-burstable-podd11766a5c3dea1071f2a919864cde0e6.slice - libcontainer container kubepods-burstable-podd11766a5c3dea1071f2a919864cde0e6.slice. Jul 15 23:57:13.028721 kubelet[2806]: E0715 23:57:13.028685 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:13.032019 systemd[1]: Created slice kubepods-burstable-pod80769599f72eb9ec00027aef61b5a97c.slice - libcontainer container kubepods-burstable-pod80769599f72eb9ec00027aef61b5a97c.slice. Jul 15 23:57:13.034784 kubelet[2806]: E0715 23:57:13.034757 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:13.057612 kubelet[2806]: I0715 23:57:13.057535 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-113" Jul 15 23:57:13.057917 kubelet[2806]: E0715 23:57:13.057894 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.113:6443/api/v1/nodes\": dial tcp 172.31.28.113:6443: connect: connection refused" node="ip-172-31-28-113" Jul 15 23:57:13.076671 kubelet[2806]: E0715 23:57:13.076631 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-113?timeout=10s\": dial tcp 172.31.28.113:6443: connect: connection refused" interval="400ms" Jul 15 23:57:13.170404 kubelet[2806]: I0715 23:57:13.170300 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d11766a5c3dea1071f2a919864cde0e6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-113\" (UID: \"d11766a5c3dea1071f2a919864cde0e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:13.170404 kubelet[2806]: I0715 23:57:13.170357 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:13.170404 kubelet[2806]: I0715 23:57:13.170374 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:13.170404 kubelet[2806]: I0715 23:57:13.170393 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:13.170404 kubelet[2806]: I0715 23:57:13.170411 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:13.170731 kubelet[2806]: I0715 23:57:13.170429 2806 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:13.170731 kubelet[2806]: I0715 23:57:13.170445 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df732de9af5be8c95c592085191d8f0f-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-113\" (UID: \"df732de9af5be8c95c592085191d8f0f\") " pod="kube-system/kube-scheduler-ip-172-31-28-113" Jul 15 23:57:13.170731 kubelet[2806]: I0715 23:57:13.170459 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d11766a5c3dea1071f2a919864cde0e6-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-113\" (UID: \"d11766a5c3dea1071f2a919864cde0e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:13.170731 kubelet[2806]: I0715 23:57:13.170482 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d11766a5c3dea1071f2a919864cde0e6-ca-certs\") pod \"kube-apiserver-ip-172-31-28-113\" (UID: \"d11766a5c3dea1071f2a919864cde0e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:13.260181 kubelet[2806]: I0715 23:57:13.259882 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-113" Jul 15 23:57:13.260299 kubelet[2806]: E0715 23:57:13.260194 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.113:6443/api/v1/nodes\": dial tcp 172.31.28.113:6443: connect: connection refused" node="ip-172-31-28-113" Jul 15 
23:57:13.318976 containerd[1896]: time="2025-07-15T23:57:13.318934363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-113,Uid:df732de9af5be8c95c592085191d8f0f,Namespace:kube-system,Attempt:0,}" Jul 15 23:57:13.338455 containerd[1896]: time="2025-07-15T23:57:13.338250381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-113,Uid:d11766a5c3dea1071f2a919864cde0e6,Namespace:kube-system,Attempt:0,}" Jul 15 23:57:13.338662 containerd[1896]: time="2025-07-15T23:57:13.338247674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-113,Uid:80769599f72eb9ec00027aef61b5a97c,Namespace:kube-system,Attempt:0,}" Jul 15 23:57:13.477356 kubelet[2806]: E0715 23:57:13.477309 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-113?timeout=10s\": dial tcp 172.31.28.113:6443: connect: connection refused" interval="800ms" Jul 15 23:57:13.504737 containerd[1896]: time="2025-07-15T23:57:13.504661605Z" level=info msg="connecting to shim e5bc26aa9227a3eeeba0fcfa49d6f2e7eb49d9d26ed14373243c9da036be0f76" address="unix:///run/containerd/s/11ddff90fbbd29216c3ceb729c4d7b5c2487328b4bba718259a30f5139f651a2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:57:13.505475 containerd[1896]: time="2025-07-15T23:57:13.505403627Z" level=info msg="connecting to shim ce5d5808313a6b7ec42780404486031177d746347f47f48d30840a684747ec14" address="unix:///run/containerd/s/82359b0ed59044ae0c4bf581cccb59588c593cdce0165e860c3aeb62f045900b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:57:13.506310 containerd[1896]: time="2025-07-15T23:57:13.506287241Z" level=info msg="connecting to shim e1c83904c06b65d6b4a3edda0dceea2f3c6037af98baa3a505ca83ea40ef5e1b" address="unix:///run/containerd/s/b821b5eeebd6442781a6df43ce9cbc3760810082fcd7d9d6a9b01351c71752bf" 
namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:57:13.615467 systemd[1]: Started cri-containerd-e5bc26aa9227a3eeeba0fcfa49d6f2e7eb49d9d26ed14373243c9da036be0f76.scope - libcontainer container e5bc26aa9227a3eeeba0fcfa49d6f2e7eb49d9d26ed14373243c9da036be0f76. Jul 15 23:57:13.621702 systemd[1]: Started cri-containerd-ce5d5808313a6b7ec42780404486031177d746347f47f48d30840a684747ec14.scope - libcontainer container ce5d5808313a6b7ec42780404486031177d746347f47f48d30840a684747ec14. Jul 15 23:57:13.624010 systemd[1]: Started cri-containerd-e1c83904c06b65d6b4a3edda0dceea2f3c6037af98baa3a505ca83ea40ef5e1b.scope - libcontainer container e1c83904c06b65d6b4a3edda0dceea2f3c6037af98baa3a505ca83ea40ef5e1b. Jul 15 23:57:13.663340 kubelet[2806]: I0715 23:57:13.663295 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-113" Jul 15 23:57:13.665526 kubelet[2806]: E0715 23:57:13.665489 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.113:6443/api/v1/nodes\": dial tcp 172.31.28.113:6443: connect: connection refused" node="ip-172-31-28-113" Jul 15 23:57:13.703582 containerd[1896]: time="2025-07-15T23:57:13.703549093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-113,Uid:80769599f72eb9ec00027aef61b5a97c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1c83904c06b65d6b4a3edda0dceea2f3c6037af98baa3a505ca83ea40ef5e1b\"" Jul 15 23:57:13.717657 containerd[1896]: time="2025-07-15T23:57:13.717596380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-113,Uid:df732de9af5be8c95c592085191d8f0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce5d5808313a6b7ec42780404486031177d746347f47f48d30840a684747ec14\"" Jul 15 23:57:13.719165 containerd[1896]: time="2025-07-15T23:57:13.719130740Z" level=info msg="CreateContainer within sandbox \"e1c83904c06b65d6b4a3edda0dceea2f3c6037af98baa3a505ca83ea40ef5e1b\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:57:13.722908 containerd[1896]: time="2025-07-15T23:57:13.722872609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-113,Uid:d11766a5c3dea1071f2a919864cde0e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5bc26aa9227a3eeeba0fcfa49d6f2e7eb49d9d26ed14373243c9da036be0f76\"" Jul 15 23:57:13.725864 containerd[1896]: time="2025-07-15T23:57:13.725829809Z" level=info msg="CreateContainer within sandbox \"ce5d5808313a6b7ec42780404486031177d746347f47f48d30840a684747ec14\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:57:13.730598 containerd[1896]: time="2025-07-15T23:57:13.729946499Z" level=info msg="CreateContainer within sandbox \"e5bc26aa9227a3eeeba0fcfa49d6f2e7eb49d9d26ed14373243c9da036be0f76\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:57:13.745521 containerd[1896]: time="2025-07-15T23:57:13.745357492Z" level=info msg="Container b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:13.755993 containerd[1896]: time="2025-07-15T23:57:13.755944052Z" level=info msg="Container 6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:13.763471 containerd[1896]: time="2025-07-15T23:57:13.763369942Z" level=info msg="Container 992e44d3f79f889fdf0ecab36b8a022233924cd781830e431fabffe9d96b7683: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:13.773845 containerd[1896]: time="2025-07-15T23:57:13.773802782Z" level=info msg="CreateContainer within sandbox \"e1c83904c06b65d6b4a3edda0dceea2f3c6037af98baa3a505ca83ea40ef5e1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7\"" Jul 15 23:57:13.774541 containerd[1896]: time="2025-07-15T23:57:13.774475390Z" level=info 
msg="StartContainer for \"b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7\"" Jul 15 23:57:13.776105 containerd[1896]: time="2025-07-15T23:57:13.776073166Z" level=info msg="connecting to shim b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7" address="unix:///run/containerd/s/b821b5eeebd6442781a6df43ce9cbc3760810082fcd7d9d6a9b01351c71752bf" protocol=ttrpc version=3 Jul 15 23:57:13.780655 containerd[1896]: time="2025-07-15T23:57:13.780620383Z" level=info msg="CreateContainer within sandbox \"e5bc26aa9227a3eeeba0fcfa49d6f2e7eb49d9d26ed14373243c9da036be0f76\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"992e44d3f79f889fdf0ecab36b8a022233924cd781830e431fabffe9d96b7683\"" Jul 15 23:57:13.781330 containerd[1896]: time="2025-07-15T23:57:13.781285954Z" level=info msg="StartContainer for \"992e44d3f79f889fdf0ecab36b8a022233924cd781830e431fabffe9d96b7683\"" Jul 15 23:57:13.784801 containerd[1896]: time="2025-07-15T23:57:13.784758582Z" level=info msg="connecting to shim 992e44d3f79f889fdf0ecab36b8a022233924cd781830e431fabffe9d96b7683" address="unix:///run/containerd/s/11ddff90fbbd29216c3ceb729c4d7b5c2487328b4bba718259a30f5139f651a2" protocol=ttrpc version=3 Jul 15 23:57:13.785297 containerd[1896]: time="2025-07-15T23:57:13.785272820Z" level=info msg="CreateContainer within sandbox \"ce5d5808313a6b7ec42780404486031177d746347f47f48d30840a684747ec14\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1\"" Jul 15 23:57:13.786011 containerd[1896]: time="2025-07-15T23:57:13.785986079Z" level=info msg="StartContainer for \"6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1\"" Jul 15 23:57:13.786851 containerd[1896]: time="2025-07-15T23:57:13.786811113Z" level=info msg="connecting to shim 6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1" 
address="unix:///run/containerd/s/82359b0ed59044ae0c4bf581cccb59588c593cdce0165e860c3aeb62f045900b" protocol=ttrpc version=3 Jul 15 23:57:13.800466 systemd[1]: Started cri-containerd-b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7.scope - libcontainer container b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7. Jul 15 23:57:13.818673 systemd[1]: Started cri-containerd-6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1.scope - libcontainer container 6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1. Jul 15 23:57:13.828480 systemd[1]: Started cri-containerd-992e44d3f79f889fdf0ecab36b8a022233924cd781830e431fabffe9d96b7683.scope - libcontainer container 992e44d3f79f889fdf0ecab36b8a022233924cd781830e431fabffe9d96b7683. Jul 15 23:57:13.840308 kubelet[2806]: E0715 23:57:13.838892 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-113&limit=500&resourceVersion=0\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 23:57:13.920573 containerd[1896]: time="2025-07-15T23:57:13.920377701Z" level=info msg="StartContainer for \"b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7\" returns successfully" Jul 15 23:57:13.959067 containerd[1896]: time="2025-07-15T23:57:13.958953036Z" level=info msg="StartContainer for \"992e44d3f79f889fdf0ecab36b8a022233924cd781830e431fabffe9d96b7683\" returns successfully" Jul 15 23:57:13.967922 containerd[1896]: time="2025-07-15T23:57:13.967821158Z" level=info msg="StartContainer for \"6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1\" returns successfully" Jul 15 23:57:13.971989 kubelet[2806]: E0715 23:57:13.971933 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:14.037326 kubelet[2806]: E0715 23:57:14.037270 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 23:57:14.111211 kubelet[2806]: E0715 23:57:14.111168 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 23:57:14.133541 kubelet[2806]: E0715 23:57:14.133486 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 23:57:14.278068 kubelet[2806]: E0715 23:57:14.277939 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-113?timeout=10s\": dial tcp 172.31.28.113:6443: connect: connection refused" interval="1.6s" Jul 15 23:57:14.468911 kubelet[2806]: I0715 23:57:14.468421 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-113" Jul 15 23:57:14.468911 kubelet[2806]: E0715 23:57:14.468772 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.113:6443/api/v1/nodes\": dial tcp 172.31.28.113:6443: connect: connection refused" 
node="ip-172-31-28-113" Jul 15 23:57:14.979060 kubelet[2806]: E0715 23:57:14.978824 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:14.985738 kubelet[2806]: E0715 23:57:14.985712 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:15.987028 kubelet[2806]: E0715 23:57:15.986971 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:15.989570 kubelet[2806]: E0715 23:57:15.987857 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:16.072126 kubelet[2806]: I0715 23:57:16.072060 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-113" Jul 15 23:57:16.989478 kubelet[2806]: E0715 23:57:16.989443 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:16.989938 kubelet[2806]: E0715 23:57:16.989877 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-113\" not found" node="ip-172-31-28-113" Jul 15 23:57:17.344873 kubelet[2806]: I0715 23:57:17.344295 2806 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-113" Jul 15 23:57:17.344873 kubelet[2806]: E0715 23:57:17.344729 2806 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-113\": node \"ip-172-31-28-113\" not found" Jul 15 23:57:17.370464 kubelet[2806]: I0715 23:57:17.370429 2806 kubelet.go:3309] "Creating 
a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:17.414368 kubelet[2806]: E0715 23:57:17.414257 2806 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-113.1852921715b35ca5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-113,UID:ip-172-31-28-113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-113,},FirstTimestamp:2025-07-15 23:57:12.846523557 +0000 UTC m=+0.428176474,LastTimestamp:2025-07-15 23:57:12.846523557 +0000 UTC m=+0.428176474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-113,}" Jul 15 23:57:17.425210 kubelet[2806]: E0715 23:57:17.425175 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:17.425210 kubelet[2806]: I0715 23:57:17.425210 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:17.426360 kubelet[2806]: E0715 23:57:17.426313 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jul 15 23:57:17.432599 kubelet[2806]: E0715 23:57:17.432555 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:17.432599 kubelet[2806]: I0715 23:57:17.432592 2806 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ip-172-31-28-113" Jul 15 23:57:17.437447 kubelet[2806]: E0715 23:57:17.437412 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-113" Jul 15 23:57:17.826049 kubelet[2806]: I0715 23:57:17.825072 2806 apiserver.go:52] "Watching apiserver" Jul 15 23:57:17.869631 kubelet[2806]: I0715 23:57:17.869577 2806 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:57:17.988827 kubelet[2806]: I0715 23:57:17.988799 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:17.991057 kubelet[2806]: E0715 23:57:17.991020 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:19.390078 systemd[1]: Reload requested from client PID 3085 ('systemctl') (unit session-7.scope)... Jul 15 23:57:19.390097 systemd[1]: Reloading... Jul 15 23:57:19.481296 zram_generator::config[3126]: No configuration found. Jul 15 23:57:19.609036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:57:19.763495 systemd[1]: Reloading finished in 372 ms. Jul 15 23:57:19.795791 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:57:19.806825 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:57:19.807059 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:57:19.807130 systemd[1]: kubelet.service: Consumed 839ms CPU time, 128.8M memory peak. 
Jul 15 23:57:19.809189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:57:20.131595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:57:20.141835 (kubelet)[3190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:57:20.208764 kubelet[3190]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:57:20.210245 kubelet[3190]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:57:20.210245 kubelet[3190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:57:20.210245 kubelet[3190]: I0715 23:57:20.209366 3190 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:57:20.216895 kubelet[3190]: I0715 23:57:20.216867 3190 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 23:57:20.217027 kubelet[3190]: I0715 23:57:20.217019 3190 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:57:20.217325 kubelet[3190]: I0715 23:57:20.217311 3190 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 23:57:20.218593 kubelet[3190]: I0715 23:57:20.218569 3190 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 23:57:20.220859 kubelet[3190]: I0715 23:57:20.220833 3190 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:57:20.228023 kubelet[3190]: I0715 23:57:20.228005 3190 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:57:20.232055 kubelet[3190]: I0715 23:57:20.232022 3190 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:57:20.232336 kubelet[3190]: I0715 23:57:20.232306 3190 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:57:20.232662 kubelet[3190]: I0715 23:57:20.232346 3190 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:57:20.232800 kubelet[3190]: I0715 23:57:20.232675 3190 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 
23:57:20.232800 kubelet[3190]: I0715 23:57:20.232690 3190 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 23:57:20.232800 kubelet[3190]: I0715 23:57:20.232752 3190 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:57:20.232942 kubelet[3190]: I0715 23:57:20.232929 3190 kubelet.go:480] "Attempting to sync node with API server" Jul 15 23:57:20.232982 kubelet[3190]: I0715 23:57:20.232953 3190 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:57:20.233019 kubelet[3190]: I0715 23:57:20.232989 3190 kubelet.go:386] "Adding apiserver pod source" Jul 15 23:57:20.233019 kubelet[3190]: I0715 23:57:20.233011 3190 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:57:20.239250 kubelet[3190]: I0715 23:57:20.237958 3190 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:57:20.240146 kubelet[3190]: I0715 23:57:20.240125 3190 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 23:57:20.244701 kubelet[3190]: I0715 23:57:20.244682 3190 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:57:20.244883 kubelet[3190]: I0715 23:57:20.244873 3190 server.go:1289] "Started kubelet" Jul 15 23:57:20.251640 kubelet[3190]: I0715 23:57:20.251604 3190 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:57:20.262885 kubelet[3190]: I0715 23:57:20.262781 3190 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:57:20.265877 kubelet[3190]: I0715 23:57:20.265424 3190 server.go:317] "Adding debug handlers to kubelet server" Jul 15 23:57:20.268448 kubelet[3190]: I0715 23:57:20.268419 3190 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:57:20.268746 kubelet[3190]: E0715 23:57:20.268722 3190 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ip-172-31-28-113\" not found" Jul 15 23:57:20.269317 kubelet[3190]: I0715 23:57:20.269015 3190 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:57:20.269317 kubelet[3190]: I0715 23:57:20.269152 3190 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:57:20.274474 kubelet[3190]: I0715 23:57:20.274402 3190 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:57:20.274672 kubelet[3190]: I0715 23:57:20.274655 3190 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:57:20.275156 kubelet[3190]: I0715 23:57:20.275067 3190 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:57:20.280521 kubelet[3190]: I0715 23:57:20.280486 3190 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:57:20.284787 kubelet[3190]: E0715 23:57:20.284380 3190 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:57:20.284787 kubelet[3190]: I0715 23:57:20.284398 3190 factory.go:223] Registration of the containerd container factory successfully Jul 15 23:57:20.284787 kubelet[3190]: I0715 23:57:20.284409 3190 factory.go:223] Registration of the systemd container factory successfully Jul 15 23:57:20.294142 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 23:57:20.302517 kubelet[3190]: I0715 23:57:20.302468 3190 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 23:57:20.307109 kubelet[3190]: I0715 23:57:20.307082 3190 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 15 23:57:20.307299 kubelet[3190]: I0715 23:57:20.307288 3190 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 23:57:20.307393 kubelet[3190]: I0715 23:57:20.307383 3190 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 23:57:20.307472 kubelet[3190]: I0715 23:57:20.307463 3190 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 23:57:20.307622 kubelet[3190]: E0715 23:57:20.307583 3190 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:57:20.352872 kubelet[3190]: I0715 23:57:20.352851 3190 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:57:20.353059 kubelet[3190]: I0715 23:57:20.353037 3190 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:57:20.353167 kubelet[3190]: I0715 23:57:20.353157 3190 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:57:20.353450 kubelet[3190]: I0715 23:57:20.353429 3190 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:57:20.353561 kubelet[3190]: I0715 23:57:20.353538 3190 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:57:20.353604 kubelet[3190]: I0715 23:57:20.353599 3190 policy_none.go:49] "None policy: Start" Jul 15 23:57:20.353646 kubelet[3190]: I0715 23:57:20.353641 3190 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:57:20.353685 kubelet[3190]: I0715 23:57:20.353680 3190 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:57:20.353816 kubelet[3190]: I0715 23:57:20.353809 3190 state_mem.go:75] "Updated machine memory state" Jul 15 23:57:20.358644 kubelet[3190]: E0715 23:57:20.358623 3190 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:57:20.359317 kubelet[3190]: I0715 
23:57:20.359296 3190 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:57:20.359622 kubelet[3190]: I0715 23:57:20.359580 3190 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:57:20.359922 kubelet[3190]: I0715 23:57:20.359897 3190 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:57:20.365700 kubelet[3190]: E0715 23:57:20.365669 3190 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:57:20.409444 kubelet[3190]: I0715 23:57:20.409087 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-113" Jul 15 23:57:20.409444 kubelet[3190]: I0715 23:57:20.409103 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:20.409444 kubelet[3190]: I0715 23:57:20.409425 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:20.462711 kubelet[3190]: I0715 23:57:20.462671 3190 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-113" Jul 15 23:57:20.469940 kubelet[3190]: I0715 23:57:20.469905 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d11766a5c3dea1071f2a919864cde0e6-ca-certs\") pod \"kube-apiserver-ip-172-31-28-113\" (UID: \"d11766a5c3dea1071f2a919864cde0e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:20.469940 kubelet[3190]: I0715 23:57:20.469940 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d11766a5c3dea1071f2a919864cde0e6-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-113\" (UID: 
\"d11766a5c3dea1071f2a919864cde0e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:20.469940 kubelet[3190]: I0715 23:57:20.469967 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:20.469940 kubelet[3190]: I0715 23:57:20.470136 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:20.469940 kubelet[3190]: I0715 23:57:20.470180 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d11766a5c3dea1071f2a919864cde0e6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-113\" (UID: \"d11766a5c3dea1071f2a919864cde0e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:20.471366 kubelet[3190]: I0715 23:57:20.470204 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:20.471366 kubelet[3190]: I0715 23:57:20.470239 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:20.471366 kubelet[3190]: I0715 23:57:20.470268 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80769599f72eb9ec00027aef61b5a97c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-113\" (UID: \"80769599f72eb9ec00027aef61b5a97c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-113" Jul 15 23:57:20.471366 kubelet[3190]: I0715 23:57:20.470291 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df732de9af5be8c95c592085191d8f0f-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-113\" (UID: \"df732de9af5be8c95c592085191d8f0f\") " pod="kube-system/kube-scheduler-ip-172-31-28-113" Jul 15 23:57:20.473612 kubelet[3190]: I0715 23:57:20.473534 3190 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-113" Jul 15 23:57:20.473817 kubelet[3190]: I0715 23:57:20.473719 3190 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-113" Jul 15 23:57:21.246678 kubelet[3190]: I0715 23:57:21.246640 3190 apiserver.go:52] "Watching apiserver" Jul 15 23:57:21.269254 kubelet[3190]: I0715 23:57:21.269188 3190 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:57:21.327244 kubelet[3190]: I0715 23:57:21.327043 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:21.328268 kubelet[3190]: I0715 23:57:21.328218 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-113" Jul 15 23:57:21.335295 kubelet[3190]: E0715 
23:57:21.335244 3190 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-113\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-113" Jul 15 23:57:21.338488 kubelet[3190]: E0715 23:57:21.338426 3190 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-113\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-113" Jul 15 23:57:21.351527 kubelet[3190]: I0715 23:57:21.351477 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-113" podStartSLOduration=1.351463189 podStartE2EDuration="1.351463189s" podCreationTimestamp="2025-07-15 23:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:57:21.350843983 +0000 UTC m=+1.200119458" watchObservedRunningTime="2025-07-15 23:57:21.351463189 +0000 UTC m=+1.200738659" Jul 15 23:57:21.375034 kubelet[3190]: I0715 23:57:21.374778 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-113" podStartSLOduration=1.374761808 podStartE2EDuration="1.374761808s" podCreationTimestamp="2025-07-15 23:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:57:21.374151678 +0000 UTC m=+1.223427158" watchObservedRunningTime="2025-07-15 23:57:21.374761808 +0000 UTC m=+1.224037265" Jul 15 23:57:21.375034 kubelet[3190]: I0715 23:57:21.374864 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-113" podStartSLOduration=1.374860259 podStartE2EDuration="1.374860259s" podCreationTimestamp="2025-07-15 23:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 
23:57:21.363490271 +0000 UTC m=+1.212765744" watchObservedRunningTime="2025-07-15 23:57:21.374860259 +0000 UTC m=+1.224135736" Jul 15 23:57:25.842934 kubelet[3190]: I0715 23:57:25.842899 3190 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:57:25.843393 containerd[1896]: time="2025-07-15T23:57:25.843197901Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:57:25.843599 kubelet[3190]: I0715 23:57:25.843486 3190 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:57:26.811772 kubelet[3190]: I0715 23:57:26.811735 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52bfb8cd-a3f9-4450-8cf4-bae4efb31b71-xtables-lock\") pod \"kube-proxy-9s9gp\" (UID: \"52bfb8cd-a3f9-4450-8cf4-bae4efb31b71\") " pod="kube-system/kube-proxy-9s9gp" Jul 15 23:57:26.811772 kubelet[3190]: I0715 23:57:26.811771 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52bfb8cd-a3f9-4450-8cf4-bae4efb31b71-lib-modules\") pod \"kube-proxy-9s9gp\" (UID: \"52bfb8cd-a3f9-4450-8cf4-bae4efb31b71\") " pod="kube-system/kube-proxy-9s9gp" Jul 15 23:57:26.811927 kubelet[3190]: I0715 23:57:26.811791 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7927\" (UniqueName: \"kubernetes.io/projected/52bfb8cd-a3f9-4450-8cf4-bae4efb31b71-kube-api-access-x7927\") pod \"kube-proxy-9s9gp\" (UID: \"52bfb8cd-a3f9-4450-8cf4-bae4efb31b71\") " pod="kube-system/kube-proxy-9s9gp" Jul 15 23:57:26.811927 kubelet[3190]: I0715 23:57:26.811833 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/52bfb8cd-a3f9-4450-8cf4-bae4efb31b71-kube-proxy\") pod \"kube-proxy-9s9gp\" (UID: \"52bfb8cd-a3f9-4450-8cf4-bae4efb31b71\") " pod="kube-system/kube-proxy-9s9gp" Jul 15 23:57:26.817885 systemd[1]: Created slice kubepods-besteffort-pod52bfb8cd_a3f9_4450_8cf4_bae4efb31b71.slice - libcontainer container kubepods-besteffort-pod52bfb8cd_a3f9_4450_8cf4_bae4efb31b71.slice. Jul 15 23:57:27.055620 systemd[1]: Created slice kubepods-besteffort-pod8391aae5_2867_4858_bd94_0ec95b96a5a2.slice - libcontainer container kubepods-besteffort-pod8391aae5_2867_4858_bd94_0ec95b96a5a2.slice. Jul 15 23:57:27.115113 kubelet[3190]: I0715 23:57:27.114954 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjlqv\" (UniqueName: \"kubernetes.io/projected/8391aae5-2867-4858-bd94-0ec95b96a5a2-kube-api-access-gjlqv\") pod \"tigera-operator-747864d56d-9264n\" (UID: \"8391aae5-2867-4858-bd94-0ec95b96a5a2\") " pod="tigera-operator/tigera-operator-747864d56d-9264n" Jul 15 23:57:27.115113 kubelet[3190]: I0715 23:57:27.115008 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8391aae5-2867-4858-bd94-0ec95b96a5a2-var-lib-calico\") pod \"tigera-operator-747864d56d-9264n\" (UID: \"8391aae5-2867-4858-bd94-0ec95b96a5a2\") " pod="tigera-operator/tigera-operator-747864d56d-9264n" Jul 15 23:57:27.126858 containerd[1896]: time="2025-07-15T23:57:27.126792560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9s9gp,Uid:52bfb8cd-a3f9-4450-8cf4-bae4efb31b71,Namespace:kube-system,Attempt:0,}" Jul 15 23:57:27.156846 containerd[1896]: time="2025-07-15T23:57:27.156730573Z" level=info msg="connecting to shim d7ae49601da0724169572babcd98972fff4adc96b2dc6add4ea12ecb57c5f943" address="unix:///run/containerd/s/7aba5574118528a2961490b94a2a7264f1b86aee467080fbec3ad54f86d07f78" namespace=k8s.io protocol=ttrpc 
version=3 Jul 15 23:57:27.190497 systemd[1]: Started cri-containerd-d7ae49601da0724169572babcd98972fff4adc96b2dc6add4ea12ecb57c5f943.scope - libcontainer container d7ae49601da0724169572babcd98972fff4adc96b2dc6add4ea12ecb57c5f943. Jul 15 23:57:27.227859 containerd[1896]: time="2025-07-15T23:57:27.227819058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9s9gp,Uid:52bfb8cd-a3f9-4450-8cf4-bae4efb31b71,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7ae49601da0724169572babcd98972fff4adc96b2dc6add4ea12ecb57c5f943\"" Jul 15 23:57:27.242138 containerd[1896]: time="2025-07-15T23:57:27.242098863Z" level=info msg="CreateContainer within sandbox \"d7ae49601da0724169572babcd98972fff4adc96b2dc6add4ea12ecb57c5f943\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:57:27.260729 containerd[1896]: time="2025-07-15T23:57:27.260692101Z" level=info msg="Container e3a7a4ce7a7f4929bdc3a02fc07584c01b1909030535f1776c988d1f2f617c7e: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:27.274933 containerd[1896]: time="2025-07-15T23:57:27.274272651Z" level=info msg="CreateContainer within sandbox \"d7ae49601da0724169572babcd98972fff4adc96b2dc6add4ea12ecb57c5f943\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3a7a4ce7a7f4929bdc3a02fc07584c01b1909030535f1776c988d1f2f617c7e\"" Jul 15 23:57:27.277153 containerd[1896]: time="2025-07-15T23:57:27.276143223Z" level=info msg="StartContainer for \"e3a7a4ce7a7f4929bdc3a02fc07584c01b1909030535f1776c988d1f2f617c7e\"" Jul 15 23:57:27.279116 containerd[1896]: time="2025-07-15T23:57:27.279074052Z" level=info msg="connecting to shim e3a7a4ce7a7f4929bdc3a02fc07584c01b1909030535f1776c988d1f2f617c7e" address="unix:///run/containerd/s/7aba5574118528a2961490b94a2a7264f1b86aee467080fbec3ad54f86d07f78" protocol=ttrpc version=3 Jul 15 23:57:27.303444 systemd[1]: Started cri-containerd-e3a7a4ce7a7f4929bdc3a02fc07584c01b1909030535f1776c988d1f2f617c7e.scope - libcontainer 
container e3a7a4ce7a7f4929bdc3a02fc07584c01b1909030535f1776c988d1f2f617c7e. Jul 15 23:57:27.350460 containerd[1896]: time="2025-07-15T23:57:27.350424173Z" level=info msg="StartContainer for \"e3a7a4ce7a7f4929bdc3a02fc07584c01b1909030535f1776c988d1f2f617c7e\" returns successfully" Jul 15 23:57:27.363870 containerd[1896]: time="2025-07-15T23:57:27.363606187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-9264n,Uid:8391aae5-2867-4858-bd94-0ec95b96a5a2,Namespace:tigera-operator,Attempt:0,}" Jul 15 23:57:27.397981 containerd[1896]: time="2025-07-15T23:57:27.397889868Z" level=info msg="connecting to shim 2199b3e00990525bfdd7d2033144148c603f75b14841e5342c33f72aedede9bf" address="unix:///run/containerd/s/05ae5f87b95299e8165c58f9562db5baacb9ace4f16a86dceae9aec5e674c71d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:57:27.428498 systemd[1]: Started cri-containerd-2199b3e00990525bfdd7d2033144148c603f75b14841e5342c33f72aedede9bf.scope - libcontainer container 2199b3e00990525bfdd7d2033144148c603f75b14841e5342c33f72aedede9bf. Jul 15 23:57:27.486644 containerd[1896]: time="2025-07-15T23:57:27.486561275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-9264n,Uid:8391aae5-2867-4858-bd94-0ec95b96a5a2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2199b3e00990525bfdd7d2033144148c603f75b14841e5342c33f72aedede9bf\"" Jul 15 23:57:27.489075 containerd[1896]: time="2025-07-15T23:57:27.489016888Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 23:57:27.933280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024132462.mount: Deactivated successfully. 
Jul 15 23:57:28.435624 kubelet[3190]: I0715 23:57:28.435561 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9s9gp" podStartSLOduration=2.4355313450000002 podStartE2EDuration="2.435531345s" podCreationTimestamp="2025-07-15 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:57:28.357924091 +0000 UTC m=+8.207199568" watchObservedRunningTime="2025-07-15 23:57:28.435531345 +0000 UTC m=+8.284806827" Jul 15 23:57:29.152905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064757844.mount: Deactivated successfully. Jul 15 23:57:29.982056 containerd[1896]: time="2025-07-15T23:57:29.982004470Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:29.983000 containerd[1896]: time="2025-07-15T23:57:29.982893303Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 15 23:57:29.983905 containerd[1896]: time="2025-07-15T23:57:29.983876917Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:29.987246 containerd[1896]: time="2025-07-15T23:57:29.986177753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:29.987246 containerd[1896]: time="2025-07-15T23:57:29.987094492Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest 
\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.497835212s" Jul 15 23:57:29.987246 containerd[1896]: time="2025-07-15T23:57:29.987128545Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 15 23:57:29.992112 containerd[1896]: time="2025-07-15T23:57:29.991598313Z" level=info msg="CreateContainer within sandbox \"2199b3e00990525bfdd7d2033144148c603f75b14841e5342c33f72aedede9bf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 23:57:29.998951 containerd[1896]: time="2025-07-15T23:57:29.998912324Z" level=info msg="Container 15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:30.028512 containerd[1896]: time="2025-07-15T23:57:30.028466299Z" level=info msg="CreateContainer within sandbox \"2199b3e00990525bfdd7d2033144148c603f75b14841e5342c33f72aedede9bf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570\"" Jul 15 23:57:30.029256 containerd[1896]: time="2025-07-15T23:57:30.029191287Z" level=info msg="StartContainer for \"15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570\"" Jul 15 23:57:30.030822 containerd[1896]: time="2025-07-15T23:57:30.030623917Z" level=info msg="connecting to shim 15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570" address="unix:///run/containerd/s/05ae5f87b95299e8165c58f9562db5baacb9ace4f16a86dceae9aec5e674c71d" protocol=ttrpc version=3 Jul 15 23:57:30.067558 systemd[1]: Started cri-containerd-15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570.scope - libcontainer container 15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570. 
Jul 15 23:57:30.104201 containerd[1896]: time="2025-07-15T23:57:30.104161593Z" level=info msg="StartContainer for \"15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570\" returns successfully" Jul 15 23:57:34.702523 update_engine[1855]: I20250715 23:57:34.702339 1855 update_attempter.cc:509] Updating boot flags... Jul 15 23:57:35.372574 sudo[2253]: pam_unix(sudo:session): session closed for user root Jul 15 23:57:35.398012 sshd[2252]: Connection closed by 139.178.89.65 port 54072 Jul 15 23:57:35.398040 sshd-session[2250]: pam_unix(sshd:session): session closed for user core Jul 15 23:57:35.412737 systemd[1]: sshd@6-172.31.28.113:22-139.178.89.65:54072.service: Deactivated successfully. Jul 15 23:57:35.417214 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:57:35.418315 systemd[1]: session-7.scope: Consumed 5.059s CPU time, 152.9M memory peak. Jul 15 23:57:35.421890 systemd-logind[1854]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:57:35.428295 systemd-logind[1854]: Removed session 7. Jul 15 23:57:40.248976 kubelet[3190]: I0715 23:57:40.248766 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-9264n" podStartSLOduration=11.748570246 podStartE2EDuration="14.248223294s" podCreationTimestamp="2025-07-15 23:57:26 +0000 UTC" firstStartedPulling="2025-07-15 23:57:27.488528714 +0000 UTC m=+7.337804169" lastFinishedPulling="2025-07-15 23:57:29.988181761 +0000 UTC m=+9.837457217" observedRunningTime="2025-07-15 23:57:30.402807304 +0000 UTC m=+10.252082783" watchObservedRunningTime="2025-07-15 23:57:40.248223294 +0000 UTC m=+20.097498771" Jul 15 23:57:40.273145 systemd[1]: Created slice kubepods-besteffort-poddcab4e4c_7b34_4fee_9410_f6d6d451503e.slice - libcontainer container kubepods-besteffort-poddcab4e4c_7b34_4fee_9410_f6d6d451503e.slice. 
Jul 15 23:57:40.313956 kubelet[3190]: I0715 23:57:40.313111 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcab4e4c-7b34-4fee-9410-f6d6d451503e-tigera-ca-bundle\") pod \"calico-typha-647c44458b-5656f\" (UID: \"dcab4e4c-7b34-4fee-9410-f6d6d451503e\") " pod="calico-system/calico-typha-647c44458b-5656f" Jul 15 23:57:40.313956 kubelet[3190]: I0715 23:57:40.313152 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dcab4e4c-7b34-4fee-9410-f6d6d451503e-typha-certs\") pod \"calico-typha-647c44458b-5656f\" (UID: \"dcab4e4c-7b34-4fee-9410-f6d6d451503e\") " pod="calico-system/calico-typha-647c44458b-5656f" Jul 15 23:57:40.313956 kubelet[3190]: I0715 23:57:40.313171 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lqm\" (UniqueName: \"kubernetes.io/projected/dcab4e4c-7b34-4fee-9410-f6d6d451503e-kube-api-access-k9lqm\") pod \"calico-typha-647c44458b-5656f\" (UID: \"dcab4e4c-7b34-4fee-9410-f6d6d451503e\") " pod="calico-system/calico-typha-647c44458b-5656f" Jul 15 23:57:40.522119 systemd[1]: Created slice kubepods-besteffort-podca49004d_5665_4fe2_a03a_3547353e6384.slice - libcontainer container kubepods-besteffort-podca49004d_5665_4fe2_a03a_3547353e6384.slice. 
Jul 15 23:57:40.597978 containerd[1896]: time="2025-07-15T23:57:40.597930878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647c44458b-5656f,Uid:dcab4e4c-7b34-4fee-9410-f6d6d451503e,Namespace:calico-system,Attempt:0,}" Jul 15 23:57:40.615040 kubelet[3190]: I0715 23:57:40.614991 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-xtables-lock\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615742 kubelet[3190]: I0715 23:57:40.615051 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txsxz\" (UniqueName: \"kubernetes.io/projected/ca49004d-5665-4fe2-a03a-3547353e6384-kube-api-access-txsxz\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615742 kubelet[3190]: I0715 23:57:40.615084 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ca49004d-5665-4fe2-a03a-3547353e6384-node-certs\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615742 kubelet[3190]: I0715 23:57:40.615115 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-cni-log-dir\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615742 kubelet[3190]: I0715 23:57:40.615137 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-cni-net-dir\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615742 kubelet[3190]: I0715 23:57:40.615160 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-policysync\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615962 kubelet[3190]: I0715 23:57:40.615183 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-var-lib-calico\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615962 kubelet[3190]: I0715 23:57:40.615207 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-cni-bin-dir\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615962 kubelet[3190]: I0715 23:57:40.615244 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-var-run-calico\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615962 kubelet[3190]: I0715 23:57:40.615276 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-flexvol-driver-host\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.615962 kubelet[3190]: I0715 23:57:40.615302 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca49004d-5665-4fe2-a03a-3547353e6384-lib-modules\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.616147 kubelet[3190]: I0715 23:57:40.615326 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca49004d-5665-4fe2-a03a-3547353e6384-tigera-ca-bundle\") pod \"calico-node-hnrqm\" (UID: \"ca49004d-5665-4fe2-a03a-3547353e6384\") " pod="calico-system/calico-node-hnrqm" Jul 15 23:57:40.670151 containerd[1896]: time="2025-07-15T23:57:40.670054653Z" level=info msg="connecting to shim 11eb7044b3d74295350e138cea4c331881f2d118a5922a8a751aee8d49d40f9b" address="unix:///run/containerd/s/1854f5e10f52d6dd9e8b7ca24171a5d9c59da39be7e6cb9cda39ba3f42e7c872" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:57:40.712541 systemd[1]: Started cri-containerd-11eb7044b3d74295350e138cea4c331881f2d118a5922a8a751aee8d49d40f9b.scope - libcontainer container 11eb7044b3d74295350e138cea4c331881f2d118a5922a8a751aee8d49d40f9b. 
Jul 15 23:57:40.725271 kubelet[3190]: E0715 23:57:40.723964 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.725271 kubelet[3190]: W0715 23:57:40.724001 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.734192 kubelet[3190]: E0715 23:57:40.733281 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.737917 kubelet[3190]: E0715 23:57:40.734566 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.738142 kubelet[3190]: W0715 23:57:40.738081 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.738142 kubelet[3190]: E0715 23:57:40.738122 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.739431 kubelet[3190]: E0715 23:57:40.739130 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.739431 kubelet[3190]: W0715 23:57:40.739171 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.743374 kubelet[3190]: E0715 23:57:40.741527 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.743877 kubelet[3190]: E0715 23:57:40.743725 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.743877 kubelet[3190]: W0715 23:57:40.743749 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.743877 kubelet[3190]: E0715 23:57:40.743773 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.744403 kubelet[3190]: E0715 23:57:40.744383 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.744403 kubelet[3190]: W0715 23:57:40.744403 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.744651 kubelet[3190]: E0715 23:57:40.744420 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.745035 kubelet[3190]: E0715 23:57:40.744858 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.745163 kubelet[3190]: W0715 23:57:40.745049 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.745163 kubelet[3190]: E0715 23:57:40.745070 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.746428 kubelet[3190]: E0715 23:57:40.745490 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.746428 kubelet[3190]: W0715 23:57:40.745501 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.746428 kubelet[3190]: E0715 23:57:40.745517 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.811664 kubelet[3190]: E0715 23:57:40.811528 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5b8vh" podUID="128dc328-f476-4e10-9f21-e490b2e597ac"
Jul 15 23:57:40.828318 containerd[1896]: time="2025-07-15T23:57:40.828131152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hnrqm,Uid:ca49004d-5665-4fe2-a03a-3547353e6384,Namespace:calico-system,Attempt:0,}"
Jul 15 23:57:40.880705 containerd[1896]: time="2025-07-15T23:57:40.880604402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647c44458b-5656f,Uid:dcab4e4c-7b34-4fee-9410-f6d6d451503e,Namespace:calico-system,Attempt:0,} returns sandbox id \"11eb7044b3d74295350e138cea4c331881f2d118a5922a8a751aee8d49d40f9b\""
Jul 15 23:57:40.887866 containerd[1896]: time="2025-07-15T23:57:40.887805736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 15 23:57:40.911615 kubelet[3190]: E0715 23:57:40.911199 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.911615 kubelet[3190]: W0715 23:57:40.911284 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.911615 kubelet[3190]: E0715 23:57:40.911316 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.913251 kubelet[3190]: E0715 23:57:40.912415 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.913251 kubelet[3190]: W0715 23:57:40.912434 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.913722 kubelet[3190]: E0715 23:57:40.913439 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.914004 kubelet[3190]: E0715 23:57:40.913938 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.914004 kubelet[3190]: W0715 23:57:40.913954 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.914004 kubelet[3190]: E0715 23:57:40.913972 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.915580 kubelet[3190]: E0715 23:57:40.915509 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.915580 kubelet[3190]: W0715 23:57:40.915530 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.915580 kubelet[3190]: E0715 23:57:40.915550 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.916861 kubelet[3190]: E0715 23:57:40.916750 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.917288 kubelet[3190]: W0715 23:57:40.917059 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.917288 kubelet[3190]: E0715 23:57:40.917082 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.919121 kubelet[3190]: E0715 23:57:40.918828 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.919811 kubelet[3190]: W0715 23:57:40.919710 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.919811 kubelet[3190]: E0715 23:57:40.919737 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.920493 kubelet[3190]: E0715 23:57:40.920319 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.920493 kubelet[3190]: W0715 23:57:40.920346 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.920493 kubelet[3190]: E0715 23:57:40.920365 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.921611 kubelet[3190]: E0715 23:57:40.921432 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.921611 kubelet[3190]: W0715 23:57:40.921490 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.921611 kubelet[3190]: E0715 23:57:40.921509 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.927420 kubelet[3190]: E0715 23:57:40.925574 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.927420 kubelet[3190]: W0715 23:57:40.925592 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.927420 kubelet[3190]: E0715 23:57:40.925610 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.927420 kubelet[3190]: I0715 23:57:40.925647 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128dc328-f476-4e10-9f21-e490b2e597ac-kubelet-dir\") pod \"csi-node-driver-5b8vh\" (UID: \"128dc328-f476-4e10-9f21-e490b2e597ac\") " pod="calico-system/csi-node-driver-5b8vh"
Jul 15 23:57:40.927420 kubelet[3190]: E0715 23:57:40.926155 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.927420 kubelet[3190]: W0715 23:57:40.926167 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.927420 kubelet[3190]: E0715 23:57:40.926186 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.927420 kubelet[3190]: I0715 23:57:40.926211 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/128dc328-f476-4e10-9f21-e490b2e597ac-registration-dir\") pod \"csi-node-driver-5b8vh\" (UID: \"128dc328-f476-4e10-9f21-e490b2e597ac\") " pod="calico-system/csi-node-driver-5b8vh"
Jul 15 23:57:40.927420 kubelet[3190]: E0715 23:57:40.927273 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.927807 kubelet[3190]: W0715 23:57:40.927291 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.927807 kubelet[3190]: E0715 23:57:40.927310 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.929176 kubelet[3190]: E0715 23:57:40.928791 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.929176 kubelet[3190]: W0715 23:57:40.928810 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.929176 kubelet[3190]: E0715 23:57:40.928829 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.930417 kubelet[3190]: E0715 23:57:40.930320 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.930793 kubelet[3190]: W0715 23:57:40.930698 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.931074 kubelet[3190]: E0715 23:57:40.930889 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.933265 kubelet[3190]: E0715 23:57:40.932072 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.933265 kubelet[3190]: W0715 23:57:40.932105 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.933265 kubelet[3190]: E0715 23:57:40.932122 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.933908 kubelet[3190]: E0715 23:57:40.933764 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.933908 kubelet[3190]: W0715 23:57:40.933779 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.933908 kubelet[3190]: E0715 23:57:40.933796 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.934529 kubelet[3190]: E0715 23:57:40.934514 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.936262 kubelet[3190]: W0715 23:57:40.934721 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.936262 kubelet[3190]: E0715 23:57:40.934746 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.937530 kubelet[3190]: E0715 23:57:40.937362 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.937530 kubelet[3190]: W0715 23:57:40.937384 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.937530 kubelet[3190]: E0715 23:57:40.937403 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.938892 kubelet[3190]: E0715 23:57:40.938676 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.938892 kubelet[3190]: W0715 23:57:40.938743 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.938892 kubelet[3190]: E0715 23:57:40.938757 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.939522 kubelet[3190]: E0715 23:57:40.939360 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.939522 kubelet[3190]: W0715 23:57:40.939376 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.939522 kubelet[3190]: E0715 23:57:40.939390 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.940209 kubelet[3190]: E0715 23:57:40.940193 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.940569 kubelet[3190]: W0715 23:57:40.940442 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.940876 kubelet[3190]: E0715 23:57:40.940736 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.942647 kubelet[3190]: E0715 23:57:40.942617 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.942840 kubelet[3190]: W0715 23:57:40.942762 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.942840 kubelet[3190]: E0715 23:57:40.942784 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.946354 kubelet[3190]: E0715 23:57:40.944624 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.946354 kubelet[3190]: W0715 23:57:40.944641 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.946354 kubelet[3190]: E0715 23:57:40.944662 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.946354 kubelet[3190]: E0715 23:57:40.944951 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.946354 kubelet[3190]: W0715 23:57:40.944961 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.946354 kubelet[3190]: E0715 23:57:40.944975 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.946354 kubelet[3190]: E0715 23:57:40.945162 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.946354 kubelet[3190]: W0715 23:57:40.945172 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.946354 kubelet[3190]: E0715 23:57:40.945184 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.946354 kubelet[3190]: E0715 23:57:40.945464 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.946736 kubelet[3190]: W0715 23:57:40.945475 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.946736 kubelet[3190]: E0715 23:57:40.945486 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.946736 kubelet[3190]: E0715 23:57:40.945688 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:40.946736 kubelet[3190]: W0715 23:57:40.945703 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:40.946736 kubelet[3190]: E0715 23:57:40.945715 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:40.948250 containerd[1896]: time="2025-07-15T23:57:40.948195719Z" level=info msg="connecting to shim b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235" address="unix:///run/containerd/s/c730d4a28cebcb06b6f4c06bdfef206d77c9a2caa69ac7fc73ab32c5b0c42e11" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:57:40.983611 systemd[1]: Started cri-containerd-b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235.scope - libcontainer container b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235.
Jul 15 23:57:41.029545 kubelet[3190]: E0715 23:57:41.029511 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.029545 kubelet[3190]: W0715 23:57:41.029539 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.029726 kubelet[3190]: E0715 23:57:41.029564 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.031691 kubelet[3190]: E0715 23:57:41.031612 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.031691 kubelet[3190]: W0715 23:57:41.031633 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.031691 kubelet[3190]: E0715 23:57:41.031655 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.032328 kubelet[3190]: E0715 23:57:41.032221 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.032328 kubelet[3190]: W0715 23:57:41.032263 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.032328 kubelet[3190]: E0715 23:57:41.032280 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.032499 kubelet[3190]: I0715 23:57:41.032325 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/128dc328-f476-4e10-9f21-e490b2e597ac-socket-dir\") pod \"csi-node-driver-5b8vh\" (UID: \"128dc328-f476-4e10-9f21-e490b2e597ac\") " pod="calico-system/csi-node-driver-5b8vh"
Jul 15 23:57:41.033464 kubelet[3190]: E0715 23:57:41.033314 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.033464 kubelet[3190]: W0715 23:57:41.033332 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.033464 kubelet[3190]: E0715 23:57:41.033348 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.033727 kubelet[3190]: E0715 23:57:41.033591 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.033727 kubelet[3190]: W0715 23:57:41.033689 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.033727 kubelet[3190]: E0715 23:57:41.033707 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.035143 kubelet[3190]: E0715 23:57:41.035116 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.035143 kubelet[3190]: W0715 23:57:41.035137 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.035303 kubelet[3190]: E0715 23:57:41.035152 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.035303 kubelet[3190]: I0715 23:57:41.035199 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bwt8\" (UniqueName: \"kubernetes.io/projected/128dc328-f476-4e10-9f21-e490b2e597ac-kube-api-access-5bwt8\") pod \"csi-node-driver-5b8vh\" (UID: \"128dc328-f476-4e10-9f21-e490b2e597ac\") " pod="calico-system/csi-node-driver-5b8vh"
Jul 15 23:57:41.035758 kubelet[3190]: E0715 23:57:41.035517 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.035758 kubelet[3190]: W0715 23:57:41.035531 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.035758 kubelet[3190]: E0715 23:57:41.035545 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.035945 kubelet[3190]: E0715 23:57:41.035782 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.035945 kubelet[3190]: W0715 23:57:41.035793 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.035945 kubelet[3190]: E0715 23:57:41.035806 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.036963 kubelet[3190]: E0715 23:57:41.036925 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.036963 kubelet[3190]: W0715 23:57:41.036942 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.036963 kubelet[3190]: E0715 23:57:41.036958 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.037291 kubelet[3190]: E0715 23:57:41.037174 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.037291 kubelet[3190]: W0715 23:57:41.037184 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.037291 kubelet[3190]: E0715 23:57:41.037198 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.038077 kubelet[3190]: E0715 23:57:41.038061 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.038077 kubelet[3190]: W0715 23:57:41.038076 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.038207 kubelet[3190]: E0715 23:57:41.038090 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.038334 kubelet[3190]: I0715 23:57:41.038310 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/128dc328-f476-4e10-9f21-e490b2e597ac-varrun\") pod \"csi-node-driver-5b8vh\" (UID: \"128dc328-f476-4e10-9f21-e490b2e597ac\") " pod="calico-system/csi-node-driver-5b8vh"
Jul 15 23:57:41.038657 kubelet[3190]: E0715 23:57:41.038440 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.038657 kubelet[3190]: W0715 23:57:41.038454 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.038657 kubelet[3190]: E0715 23:57:41.038466 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.038815 kubelet[3190]: E0715 23:57:41.038691 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.038815 kubelet[3190]: W0715 23:57:41.038701 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.038815 kubelet[3190]: E0715 23:57:41.038714 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.038997 kubelet[3190]: E0715 23:57:41.038982 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.038997 kubelet[3190]: W0715 23:57:41.038996 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.039350 kubelet[3190]: E0715 23:57:41.039009 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:57:41.039350 kubelet[3190]: E0715 23:57:41.039318 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:57:41.039350 kubelet[3190]: W0715 23:57:41.039329 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:57:41.039350 kubelet[3190]: E0715 23:57:41.039342 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.039808 kubelet[3190]: E0715 23:57:41.039659 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.039808 kubelet[3190]: W0715 23:57:41.039671 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.039808 kubelet[3190]: E0715 23:57:41.039685 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.040344 kubelet[3190]: E0715 23:57:41.040326 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.040344 kubelet[3190]: W0715 23:57:41.040344 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.040686 kubelet[3190]: E0715 23:57:41.040357 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.040739 kubelet[3190]: E0715 23:57:41.040732 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.040792 kubelet[3190]: W0715 23:57:41.040744 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.040792 kubelet[3190]: E0715 23:57:41.040758 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.041311 kubelet[3190]: E0715 23:57:41.041293 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.041311 kubelet[3190]: W0715 23:57:41.041311 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.041502 kubelet[3190]: E0715 23:57:41.041325 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.069355 containerd[1896]: time="2025-07-15T23:57:41.069105345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hnrqm,Uid:ca49004d-5665-4fe2-a03a-3547353e6384,Namespace:calico-system,Attempt:0,} returns sandbox id \"b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235\"" Jul 15 23:57:41.139594 kubelet[3190]: E0715 23:57:41.139507 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.139594 kubelet[3190]: W0715 23:57:41.139537 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.139594 kubelet[3190]: E0715 23:57:41.139563 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.140642 kubelet[3190]: E0715 23:57:41.140324 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.140642 kubelet[3190]: W0715 23:57:41.140578 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.140912 kubelet[3190]: E0715 23:57:41.140806 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.147433 kubelet[3190]: E0715 23:57:41.141470 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.147433 kubelet[3190]: W0715 23:57:41.141596 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.147433 kubelet[3190]: E0715 23:57:41.141616 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.147433 kubelet[3190]: E0715 23:57:41.141901 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.147433 kubelet[3190]: W0715 23:57:41.141912 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.147433 kubelet[3190]: E0715 23:57:41.141925 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.147433 kubelet[3190]: E0715 23:57:41.142084 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.147433 kubelet[3190]: W0715 23:57:41.142092 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.147433 kubelet[3190]: E0715 23:57:41.142104 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.147433 kubelet[3190]: E0715 23:57:41.143734 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.147887 kubelet[3190]: W0715 23:57:41.143749 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.147887 kubelet[3190]: E0715 23:57:41.143765 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.147887 kubelet[3190]: E0715 23:57:41.144268 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.147887 kubelet[3190]: W0715 23:57:41.144281 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.147887 kubelet[3190]: E0715 23:57:41.144295 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.148569 kubelet[3190]: E0715 23:57:41.148323 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.148569 kubelet[3190]: W0715 23:57:41.148341 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.148569 kubelet[3190]: E0715 23:57:41.148373 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.149380 kubelet[3190]: E0715 23:57:41.149357 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.149544 kubelet[3190]: W0715 23:57:41.149466 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.149544 kubelet[3190]: E0715 23:57:41.149484 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.149934 kubelet[3190]: E0715 23:57:41.149919 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.150040 kubelet[3190]: W0715 23:57:41.149988 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.150040 kubelet[3190]: E0715 23:57:41.150002 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.151905 kubelet[3190]: E0715 23:57:41.151691 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.151905 kubelet[3190]: W0715 23:57:41.151709 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.151905 kubelet[3190]: E0715 23:57:41.151724 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.152058 kubelet[3190]: E0715 23:57:41.151959 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.152058 kubelet[3190]: W0715 23:57:41.151969 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.152058 kubelet[3190]: E0715 23:57:41.151981 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.152524 kubelet[3190]: E0715 23:57:41.152362 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.152524 kubelet[3190]: W0715 23:57:41.152376 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.152524 kubelet[3190]: E0715 23:57:41.152389 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.153030 kubelet[3190]: E0715 23:57:41.153017 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.153156 kubelet[3190]: W0715 23:57:41.153126 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.153296 kubelet[3190]: E0715 23:57:41.153264 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:41.153733 kubelet[3190]: E0715 23:57:41.153720 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.154099 kubelet[3190]: W0715 23:57:41.153809 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.154099 kubelet[3190]: E0715 23:57:41.153834 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:41.169519 kubelet[3190]: E0715 23:57:41.169489 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:41.169902 kubelet[3190]: W0715 23:57:41.169723 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:41.169902 kubelet[3190]: E0715 23:57:41.169755 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:42.213998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831756555.mount: Deactivated successfully. 
Jul 15 23:57:42.309549 kubelet[3190]: E0715 23:57:42.309506 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5b8vh" podUID="128dc328-f476-4e10-9f21-e490b2e597ac" Jul 15 23:57:43.223986 containerd[1896]: time="2025-07-15T23:57:43.223931500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:43.226038 containerd[1896]: time="2025-07-15T23:57:43.226001670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 23:57:43.229438 containerd[1896]: time="2025-07-15T23:57:43.228594701Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:43.232125 containerd[1896]: time="2025-07-15T23:57:43.232003324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:43.233523 containerd[1896]: time="2025-07-15T23:57:43.233460861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.345598236s" Jul 15 23:57:43.233729 containerd[1896]: time="2025-07-15T23:57:43.233617867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 23:57:43.235522 containerd[1896]: time="2025-07-15T23:57:43.235493122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 23:57:43.252617 containerd[1896]: time="2025-07-15T23:57:43.251092702Z" level=info msg="CreateContainer within sandbox \"11eb7044b3d74295350e138cea4c331881f2d118a5922a8a751aee8d49d40f9b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 23:57:43.272168 containerd[1896]: time="2025-07-15T23:57:43.270648482Z" level=info msg="Container e04180546d8735a2cc5e7c3b57215712387512d35093c1f874b0c7a33093c5cc: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:43.285927 containerd[1896]: time="2025-07-15T23:57:43.285868461Z" level=info msg="CreateContainer within sandbox \"11eb7044b3d74295350e138cea4c331881f2d118a5922a8a751aee8d49d40f9b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e04180546d8735a2cc5e7c3b57215712387512d35093c1f874b0c7a33093c5cc\"" Jul 15 23:57:43.287259 containerd[1896]: time="2025-07-15T23:57:43.286517237Z" level=info msg="StartContainer for \"e04180546d8735a2cc5e7c3b57215712387512d35093c1f874b0c7a33093c5cc\"" Jul 15 23:57:43.288532 containerd[1896]: time="2025-07-15T23:57:43.288279124Z" level=info msg="connecting to shim e04180546d8735a2cc5e7c3b57215712387512d35093c1f874b0c7a33093c5cc" address="unix:///run/containerd/s/1854f5e10f52d6dd9e8b7ca24171a5d9c59da39be7e6cb9cda39ba3f42e7c872" protocol=ttrpc version=3 Jul 15 23:57:43.368678 systemd[1]: Started cri-containerd-e04180546d8735a2cc5e7c3b57215712387512d35093c1f874b0c7a33093c5cc.scope - libcontainer container e04180546d8735a2cc5e7c3b57215712387512d35093c1f874b0c7a33093c5cc. 
Jul 15 23:57:43.543946 containerd[1896]: time="2025-07-15T23:57:43.543754215Z" level=info msg="StartContainer for \"e04180546d8735a2cc5e7c3b57215712387512d35093c1f874b0c7a33093c5cc\" returns successfully" Jul 15 23:57:44.308189 kubelet[3190]: E0715 23:57:44.308127 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5b8vh" podUID="128dc328-f476-4e10-9f21-e490b2e597ac" Jul 15 23:57:44.476184 kubelet[3190]: E0715 23:57:44.476043 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.476184 kubelet[3190]: W0715 23:57:44.476073 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.482432 kubelet[3190]: E0715 23:57:44.481370 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.482948 kubelet[3190]: E0715 23:57:44.482914 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.483167 kubelet[3190]: W0715 23:57:44.483097 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.483167 kubelet[3190]: E0715 23:57:44.483127 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.483963 kubelet[3190]: E0715 23:57:44.483942 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.484032 kubelet[3190]: W0715 23:57:44.483965 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.484032 kubelet[3190]: E0715 23:57:44.483995 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.485100 kubelet[3190]: E0715 23:57:44.484828 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.485100 kubelet[3190]: W0715 23:57:44.484852 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.485100 kubelet[3190]: E0715 23:57:44.484896 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.486038 kubelet[3190]: E0715 23:57:44.485405 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.486038 kubelet[3190]: W0715 23:57:44.485469 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.486038 kubelet[3190]: E0715 23:57:44.485488 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.486203 kubelet[3190]: E0715 23:57:44.486157 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.486260 kubelet[3190]: W0715 23:57:44.486215 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.486316 kubelet[3190]: E0715 23:57:44.486277 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.486804 kubelet[3190]: E0715 23:57:44.486644 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.486804 kubelet[3190]: W0715 23:57:44.486695 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.486804 kubelet[3190]: E0715 23:57:44.486713 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.487275 kubelet[3190]: E0715 23:57:44.487057 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.487275 kubelet[3190]: W0715 23:57:44.487071 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.487275 kubelet[3190]: E0715 23:57:44.487117 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.488103 kubelet[3190]: E0715 23:57:44.487910 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.488103 kubelet[3190]: W0715 23:57:44.487928 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.488220 kubelet[3190]: E0715 23:57:44.487942 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.489236 kubelet[3190]: E0715 23:57:44.488702 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.489236 kubelet[3190]: W0715 23:57:44.488868 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.489236 kubelet[3190]: E0715 23:57:44.488947 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.489418 kubelet[3190]: E0715 23:57:44.489322 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.489418 kubelet[3190]: W0715 23:57:44.489361 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.489418 kubelet[3190]: E0715 23:57:44.489374 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.490032 kubelet[3190]: E0715 23:57:44.489701 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.490032 kubelet[3190]: W0715 23:57:44.489715 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.490032 kubelet[3190]: E0715 23:57:44.489727 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.490541 kubelet[3190]: E0715 23:57:44.490438 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.490541 kubelet[3190]: W0715 23:57:44.490456 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.490541 kubelet[3190]: E0715 23:57:44.490470 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.491212 kubelet[3190]: E0715 23:57:44.491195 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.491292 kubelet[3190]: W0715 23:57:44.491214 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.491524 kubelet[3190]: E0715 23:57:44.491351 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.491859 kubelet[3190]: E0715 23:57:44.491841 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.491859 kubelet[3190]: W0715 23:57:44.491858 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.492168 kubelet[3190]: E0715 23:57:44.491873 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.493869 kubelet[3190]: E0715 23:57:44.493722 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.493869 kubelet[3190]: W0715 23:57:44.493742 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.493869 kubelet[3190]: E0715 23:57:44.493759 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.494872 kubelet[3190]: E0715 23:57:44.494712 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.494872 kubelet[3190]: W0715 23:57:44.494728 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.494872 kubelet[3190]: E0715 23:57:44.494742 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.495160 kubelet[3190]: E0715 23:57:44.495100 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.495160 kubelet[3190]: W0715 23:57:44.495133 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.495160 kubelet[3190]: E0715 23:57:44.495146 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.495751 kubelet[3190]: E0715 23:57:44.495737 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.495961 kubelet[3190]: W0715 23:57:44.495927 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.495961 kubelet[3190]: E0715 23:57:44.495947 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.496537 kubelet[3190]: E0715 23:57:44.496522 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.496714 kubelet[3190]: W0715 23:57:44.496645 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.496714 kubelet[3190]: E0715 23:57:44.496664 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.497207 kubelet[3190]: E0715 23:57:44.497187 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.497382 kubelet[3190]: W0715 23:57:44.497284 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.497382 kubelet[3190]: E0715 23:57:44.497300 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.497977 kubelet[3190]: E0715 23:57:44.497928 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.497977 kubelet[3190]: W0715 23:57:44.497944 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.497977 kubelet[3190]: E0715 23:57:44.497960 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.498635 kubelet[3190]: E0715 23:57:44.498547 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.498635 kubelet[3190]: W0715 23:57:44.498603 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.498635 kubelet[3190]: E0715 23:57:44.498619 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.499119 kubelet[3190]: E0715 23:57:44.499095 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.499331 kubelet[3190]: W0715 23:57:44.499207 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.499331 kubelet[3190]: E0715 23:57:44.499251 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.499873 kubelet[3190]: E0715 23:57:44.499787 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.499873 kubelet[3190]: W0715 23:57:44.499802 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.499873 kubelet[3190]: E0715 23:57:44.499815 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.500329 kubelet[3190]: E0715 23:57:44.500315 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.500329 kubelet[3190]: W0715 23:57:44.500362 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.500329 kubelet[3190]: E0715 23:57:44.500377 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.501872 kubelet[3190]: E0715 23:57:44.501313 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.501872 kubelet[3190]: W0715 23:57:44.501328 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.501872 kubelet[3190]: E0715 23:57:44.501342 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.502724 kubelet[3190]: E0715 23:57:44.502707 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.502724 kubelet[3190]: W0715 23:57:44.502724 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.502870 kubelet[3190]: E0715 23:57:44.502850 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.503455 kubelet[3190]: E0715 23:57:44.503438 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.503455 kubelet[3190]: W0715 23:57:44.503456 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.503722 kubelet[3190]: E0715 23:57:44.503470 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.504183 kubelet[3190]: E0715 23:57:44.504016 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.504183 kubelet[3190]: W0715 23:57:44.504130 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.504424 kubelet[3190]: E0715 23:57:44.504216 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.506474 kubelet[3190]: E0715 23:57:44.506386 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.506474 kubelet[3190]: W0715 23:57:44.506406 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.506474 kubelet[3190]: E0715 23:57:44.506425 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.507996 kubelet[3190]: E0715 23:57:44.507974 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.507996 kubelet[3190]: W0715 23:57:44.507995 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.508136 kubelet[3190]: E0715 23:57:44.508012 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:57:44.508594 kubelet[3190]: E0715 23:57:44.508326 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:57:44.508594 kubelet[3190]: W0715 23:57:44.508344 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:57:44.508594 kubelet[3190]: E0715 23:57:44.508414 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:57:44.551492 containerd[1896]: time="2025-07-15T23:57:44.551423678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:44.553442 containerd[1896]: time="2025-07-15T23:57:44.553242403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 23:57:44.555715 containerd[1896]: time="2025-07-15T23:57:44.555656177Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:44.558775 containerd[1896]: time="2025-07-15T23:57:44.558664171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:44.559478 containerd[1896]: time="2025-07-15T23:57:44.559445949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.323919464s" Jul 15 23:57:44.559535 containerd[1896]: time="2025-07-15T23:57:44.559484678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 23:57:44.566297 containerd[1896]: time="2025-07-15T23:57:44.566251950Z" level=info msg="CreateContainer within sandbox \"b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 23:57:44.611255 containerd[1896]: time="2025-07-15T23:57:44.610097460Z" level=info msg="Container bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:44.624560 containerd[1896]: time="2025-07-15T23:57:44.624429932Z" level=info msg="CreateContainer within sandbox \"b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702\"" Jul 15 23:57:44.626802 containerd[1896]: time="2025-07-15T23:57:44.626751688Z" level=info msg="StartContainer for \"bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702\"" Jul 15 23:57:44.629927 containerd[1896]: time="2025-07-15T23:57:44.629889826Z" level=info msg="connecting to shim bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702" address="unix:///run/containerd/s/c730d4a28cebcb06b6f4c06bdfef206d77c9a2caa69ac7fc73ab32c5b0c42e11" protocol=ttrpc version=3 Jul 15 23:57:44.660500 systemd[1]: Started cri-containerd-bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702.scope - libcontainer container 
bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702. Jul 15 23:57:44.709763 containerd[1896]: time="2025-07-15T23:57:44.709725771Z" level=info msg="StartContainer for \"bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702\" returns successfully" Jul 15 23:57:44.723959 systemd[1]: cri-containerd-bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702.scope: Deactivated successfully. Jul 15 23:57:44.797537 containerd[1896]: time="2025-07-15T23:57:44.797485851Z" level=info msg="received exit event container_id:\"bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702\" id:\"bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702\" pid:4054 exited_at:{seconds:1752623864 nanos:727126230}" Jul 15 23:57:44.827568 containerd[1896]: time="2025-07-15T23:57:44.827386740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702\" id:\"bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702\" pid:4054 exited_at:{seconds:1752623864 nanos:727126230}" Jul 15 23:57:44.891782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bee52d972b1017c05c62dbc8df8adebafe84bb71ae8a4575b8d4e95b31623702-rootfs.mount: Deactivated successfully. 
Jul 15 23:57:45.417352 kubelet[3190]: I0715 23:57:45.417208 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:57:45.419831 containerd[1896]: time="2025-07-15T23:57:45.419769749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 23:57:45.441689 kubelet[3190]: I0715 23:57:45.440187 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-647c44458b-5656f" podStartSLOduration=3.09212646 podStartE2EDuration="5.440165648s" podCreationTimestamp="2025-07-15 23:57:40 +0000 UTC" firstStartedPulling="2025-07-15 23:57:40.886739627 +0000 UTC m=+20.736015096" lastFinishedPulling="2025-07-15 23:57:43.234778825 +0000 UTC m=+23.084054284" observedRunningTime="2025-07-15 23:57:44.426090403 +0000 UTC m=+24.275365880" watchObservedRunningTime="2025-07-15 23:57:45.440165648 +0000 UTC m=+25.289441123" Jul 15 23:57:46.309767 kubelet[3190]: E0715 23:57:46.308497 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5b8vh" podUID="128dc328-f476-4e10-9f21-e490b2e597ac" Jul 15 23:57:48.308398 kubelet[3190]: E0715 23:57:48.308250 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5b8vh" podUID="128dc328-f476-4e10-9f21-e490b2e597ac" Jul 15 23:57:49.363292 containerd[1896]: time="2025-07-15T23:57:49.363221565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:49.365528 containerd[1896]: time="2025-07-15T23:57:49.365058614Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 23:57:49.367758 containerd[1896]: time="2025-07-15T23:57:49.367694889Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:49.371081 containerd[1896]: time="2025-07-15T23:57:49.371015974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:49.371544 containerd[1896]: time="2025-07-15T23:57:49.371511888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.951242245s" Jul 15 23:57:49.371615 containerd[1896]: time="2025-07-15T23:57:49.371545921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 23:57:49.394268 containerd[1896]: time="2025-07-15T23:57:49.393566573Z" level=info msg="CreateContainer within sandbox \"b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 23:57:49.421856 containerd[1896]: time="2025-07-15T23:57:49.421406251Z" level=info msg="Container fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:49.440831 containerd[1896]: time="2025-07-15T23:57:49.440767174Z" level=info msg="CreateContainer within sandbox \"b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0\"" Jul 15 23:57:49.441789 containerd[1896]: time="2025-07-15T23:57:49.441592981Z" level=info msg="StartContainer for \"fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0\"" Jul 15 23:57:49.443750 containerd[1896]: time="2025-07-15T23:57:49.443691511Z" level=info msg="connecting to shim fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0" address="unix:///run/containerd/s/c730d4a28cebcb06b6f4c06bdfef206d77c9a2caa69ac7fc73ab32c5b0c42e11" protocol=ttrpc version=3 Jul 15 23:57:49.472471 systemd[1]: Started cri-containerd-fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0.scope - libcontainer container fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0. Jul 15 23:57:49.519646 containerd[1896]: time="2025-07-15T23:57:49.519587991Z" level=info msg="StartContainer for \"fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0\" returns successfully" Jul 15 23:57:50.269604 systemd[1]: cri-containerd-fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0.scope: Deactivated successfully. Jul 15 23:57:50.271693 systemd[1]: cri-containerd-fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0.scope: Consumed 561ms CPU time, 162.9M memory peak, 8M read from disk, 171.2M written to disk. 
Jul 15 23:57:50.277837 containerd[1896]: time="2025-07-15T23:57:50.277792343Z" level=info msg="received exit event container_id:\"fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0\" id:\"fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0\" pid:4110 exited_at:{seconds:1752623870 nanos:276841337}" Jul 15 23:57:50.278927 containerd[1896]: time="2025-07-15T23:57:50.278885359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0\" id:\"fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0\" pid:4110 exited_at:{seconds:1752623870 nanos:276841337}" Jul 15 23:57:50.310406 kubelet[3190]: E0715 23:57:50.310356 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5b8vh" podUID="128dc328-f476-4e10-9f21-e490b2e597ac" Jul 15 23:57:50.338135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0-rootfs.mount: Deactivated successfully. Jul 15 23:57:50.390263 kubelet[3190]: I0715 23:57:50.390081 3190 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 23:57:50.563326 containerd[1896]: time="2025-07-15T23:57:50.561841530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 23:57:50.640962 systemd[1]: Created slice kubepods-burstable-pod78eab6a1_05c2_474a_ada8_8cc9bda3773a.slice - libcontainer container kubepods-burstable-pod78eab6a1_05c2_474a_ada8_8cc9bda3773a.slice. 
Jul 15 23:57:50.643688 kubelet[3190]: I0715 23:57:50.641392 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szxc6\" (UniqueName: \"kubernetes.io/projected/78eab6a1-05c2-474a-ada8-8cc9bda3773a-kube-api-access-szxc6\") pod \"coredns-674b8bbfcf-7w44v\" (UID: \"78eab6a1-05c2-474a-ada8-8cc9bda3773a\") " pod="kube-system/coredns-674b8bbfcf-7w44v" Jul 15 23:57:50.643688 kubelet[3190]: I0715 23:57:50.641473 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78eab6a1-05c2-474a-ada8-8cc9bda3773a-config-volume\") pod \"coredns-674b8bbfcf-7w44v\" (UID: \"78eab6a1-05c2-474a-ada8-8cc9bda3773a\") " pod="kube-system/coredns-674b8bbfcf-7w44v" Jul 15 23:57:50.662133 systemd[1]: Created slice kubepods-burstable-podbbdcda6c_439d_4163_bce0_8dfbc9a6f6d5.slice - libcontainer container kubepods-burstable-podbbdcda6c_439d_4163_bce0_8dfbc9a6f6d5.slice. Jul 15 23:57:50.708099 containerd[1896]: time="2025-07-15T23:57:50.707898758Z" level=error msg="collecting metrics for fb8d8bed0de95f9d8e45882ef72a01ba9196537c48b397537ecbe35b0270fdf0" error="ttrpc: closed" Jul 15 23:57:50.731735 systemd[1]: Created slice kubepods-besteffort-pod8d429579_dd7f_48d0_a55a_81a1c29e3add.slice - libcontainer container kubepods-besteffort-pod8d429579_dd7f_48d0_a55a_81a1c29e3add.slice. Jul 15 23:57:50.762377 systemd[1]: Created slice kubepods-besteffort-pod04eac452_7bdb_4f3a_9b64_fb865c6416b6.slice - libcontainer container kubepods-besteffort-pod04eac452_7bdb_4f3a_9b64_fb865c6416b6.slice. Jul 15 23:57:50.774076 systemd[1]: Created slice kubepods-besteffort-pod642af4f8_45e2_4a1f_a15e_a31e1bedc462.slice - libcontainer container kubepods-besteffort-pod642af4f8_45e2_4a1f_a15e_a31e1bedc462.slice. 
Jul 15 23:57:50.798709 systemd[1]: Created slice kubepods-besteffort-pod0c936f3e_8905_404f_b25f_a78dbd4fc552.slice - libcontainer container kubepods-besteffort-pod0c936f3e_8905_404f_b25f_a78dbd4fc552.slice. Jul 15 23:57:50.837698 systemd[1]: Created slice kubepods-besteffort-pod20795f98_3bfb_43fb_afd0_3a7080cfb21d.slice - libcontainer container kubepods-besteffort-pod20795f98_3bfb_43fb_afd0_3a7080cfb21d.slice. Jul 15 23:57:50.844187 kubelet[3190]: I0715 23:57:50.843948 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntqs\" (UniqueName: \"kubernetes.io/projected/8d429579-dd7f-48d0-a55a-81a1c29e3add-kube-api-access-lntqs\") pod \"calico-apiserver-94bf58f7b-9lt55\" (UID: \"8d429579-dd7f-48d0-a55a-81a1c29e3add\") " pod="calico-apiserver/calico-apiserver-94bf58f7b-9lt55" Jul 15 23:57:50.846968 kubelet[3190]: I0715 23:57:50.846935 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20795f98-3bfb-43fb-afd0-3a7080cfb21d-tigera-ca-bundle\") pod \"calico-kube-controllers-d5f69495f-dmf7c\" (UID: \"20795f98-3bfb-43fb-afd0-3a7080cfb21d\") " pod="calico-system/calico-kube-controllers-d5f69495f-dmf7c" Jul 15 23:57:50.847144 kubelet[3190]: I0715 23:57:50.847131 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lv2v\" (UniqueName: \"kubernetes.io/projected/20795f98-3bfb-43fb-afd0-3a7080cfb21d-kube-api-access-8lv2v\") pod \"calico-kube-controllers-d5f69495f-dmf7c\" (UID: \"20795f98-3bfb-43fb-afd0-3a7080cfb21d\") " pod="calico-system/calico-kube-controllers-d5f69495f-dmf7c" Jul 15 23:57:50.847216 kubelet[3190]: I0715 23:57:50.847207 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8w27\" (UniqueName: 
\"kubernetes.io/projected/642af4f8-45e2-4a1f-a15e-a31e1bedc462-kube-api-access-w8w27\") pod \"calico-apiserver-94bf58f7b-mpkpj\" (UID: \"642af4f8-45e2-4a1f-a15e-a31e1bedc462\") " pod="calico-apiserver/calico-apiserver-94bf58f7b-mpkpj" Jul 15 23:57:50.847312 kubelet[3190]: I0715 23:57:50.847302 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0c936f3e-8905-404f-b25f-a78dbd4fc552-goldmane-key-pair\") pod \"goldmane-768f4c5c69-9bgld\" (UID: \"0c936f3e-8905-404f-b25f-a78dbd4fc552\") " pod="calico-system/goldmane-768f4c5c69-9bgld" Jul 15 23:57:50.847365 kubelet[3190]: I0715 23:57:50.847357 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04eac452-7bdb-4f3a-9b64-fb865c6416b6-whisker-ca-bundle\") pod \"whisker-7b5b98f47d-fd7kf\" (UID: \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\") " pod="calico-system/whisker-7b5b98f47d-fd7kf" Jul 15 23:57:50.847432 kubelet[3190]: I0715 23:57:50.847423 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpz9z\" (UniqueName: \"kubernetes.io/projected/04eac452-7bdb-4f3a-9b64-fb865c6416b6-kube-api-access-cpz9z\") pod \"whisker-7b5b98f47d-fd7kf\" (UID: \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\") " pod="calico-system/whisker-7b5b98f47d-fd7kf" Jul 15 23:57:50.847487 kubelet[3190]: I0715 23:57:50.847476 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d429579-dd7f-48d0-a55a-81a1c29e3add-calico-apiserver-certs\") pod \"calico-apiserver-94bf58f7b-9lt55\" (UID: \"8d429579-dd7f-48d0-a55a-81a1c29e3add\") " pod="calico-apiserver/calico-apiserver-94bf58f7b-9lt55" Jul 15 23:57:50.847540 kubelet[3190]: I0715 23:57:50.847531 3190 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c936f3e-8905-404f-b25f-a78dbd4fc552-config\") pod \"goldmane-768f4c5c69-9bgld\" (UID: \"0c936f3e-8905-404f-b25f-a78dbd4fc552\") " pod="calico-system/goldmane-768f4c5c69-9bgld" Jul 15 23:57:50.847602 kubelet[3190]: I0715 23:57:50.847592 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6zn\" (UniqueName: \"kubernetes.io/projected/bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5-kube-api-access-9m6zn\") pod \"coredns-674b8bbfcf-sbczf\" (UID: \"bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5\") " pod="kube-system/coredns-674b8bbfcf-sbczf" Jul 15 23:57:50.847655 kubelet[3190]: I0715 23:57:50.847646 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04eac452-7bdb-4f3a-9b64-fb865c6416b6-whisker-backend-key-pair\") pod \"whisker-7b5b98f47d-fd7kf\" (UID: \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\") " pod="calico-system/whisker-7b5b98f47d-fd7kf" Jul 15 23:57:50.848563 kubelet[3190]: I0715 23:57:50.848309 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/642af4f8-45e2-4a1f-a15e-a31e1bedc462-calico-apiserver-certs\") pod \"calico-apiserver-94bf58f7b-mpkpj\" (UID: \"642af4f8-45e2-4a1f-a15e-a31e1bedc462\") " pod="calico-apiserver/calico-apiserver-94bf58f7b-mpkpj" Jul 15 23:57:50.848563 kubelet[3190]: I0715 23:57:50.848439 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c936f3e-8905-404f-b25f-a78dbd4fc552-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-9bgld\" (UID: \"0c936f3e-8905-404f-b25f-a78dbd4fc552\") " pod="calico-system/goldmane-768f4c5c69-9bgld" Jul 15 23:57:50.848563 
kubelet[3190]: I0715 23:57:50.848457 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxb84\" (UniqueName: \"kubernetes.io/projected/0c936f3e-8905-404f-b25f-a78dbd4fc552-kube-api-access-bxb84\") pod \"goldmane-768f4c5c69-9bgld\" (UID: \"0c936f3e-8905-404f-b25f-a78dbd4fc552\") " pod="calico-system/goldmane-768f4c5c69-9bgld" Jul 15 23:57:50.848563 kubelet[3190]: I0715 23:57:50.848564 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5-config-volume\") pod \"coredns-674b8bbfcf-sbczf\" (UID: \"bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5\") " pod="kube-system/coredns-674b8bbfcf-sbczf" Jul 15 23:57:50.994264 containerd[1896]: time="2025-07-15T23:57:50.993561424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7w44v,Uid:78eab6a1-05c2-474a-ada8-8cc9bda3773a,Namespace:kube-system,Attempt:0,}" Jul 15 23:57:51.090719 containerd[1896]: time="2025-07-15T23:57:51.088648436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94bf58f7b-mpkpj,Uid:642af4f8-45e2-4a1f-a15e-a31e1bedc462,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:57:51.133766 containerd[1896]: time="2025-07-15T23:57:51.133734308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9bgld,Uid:0c936f3e-8905-404f-b25f-a78dbd4fc552,Namespace:calico-system,Attempt:0,}" Jul 15 23:57:51.149360 containerd[1896]: time="2025-07-15T23:57:51.149187796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5f69495f-dmf7c,Uid:20795f98-3bfb-43fb-afd0-3a7080cfb21d,Namespace:calico-system,Attempt:0,}" Jul 15 23:57:51.308019 containerd[1896]: time="2025-07-15T23:57:51.307967434Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-sbczf,Uid:bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5,Namespace:kube-system,Attempt:0,}" Jul 15 23:57:51.354954 containerd[1896]: time="2025-07-15T23:57:51.354510476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94bf58f7b-9lt55,Uid:8d429579-dd7f-48d0-a55a-81a1c29e3add,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:57:51.376911 containerd[1896]: time="2025-07-15T23:57:51.376640017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5b98f47d-fd7kf,Uid:04eac452-7bdb-4f3a-9b64-fb865c6416b6,Namespace:calico-system,Attempt:0,}" Jul 15 23:57:51.534595 containerd[1896]: time="2025-07-15T23:57:51.534545419Z" level=error msg="Failed to destroy network for sandbox \"556f66ad6e10a54a342e0e334fcc6baa50e09a0f5bb21aae0c042f950631d419\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.536283 containerd[1896]: time="2025-07-15T23:57:51.535920816Z" level=error msg="Failed to destroy network for sandbox \"c93b5d49815c5ecd4cf8492f4c23f95122add8bb3445f1f94c651789464a0695\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.536695 containerd[1896]: time="2025-07-15T23:57:51.536592331Z" level=error msg="Failed to destroy network for sandbox \"08756a018d9d617107186ce683468538830568bde3b89890be8b6af5c8e954de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.547246 containerd[1896]: time="2025-07-15T23:57:51.546732993Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-94bf58f7b-mpkpj,Uid:642af4f8-45e2-4a1f-a15e-a31e1bedc462,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"556f66ad6e10a54a342e0e334fcc6baa50e09a0f5bb21aae0c042f950631d419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.553340 containerd[1896]: time="2025-07-15T23:57:51.551852929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5f69495f-dmf7c,Uid:20795f98-3bfb-43fb-afd0-3a7080cfb21d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93b5d49815c5ecd4cf8492f4c23f95122add8bb3445f1f94c651789464a0695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.553531 kubelet[3190]: E0715 23:57:51.552851 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"556f66ad6e10a54a342e0e334fcc6baa50e09a0f5bb21aae0c042f950631d419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.553531 kubelet[3190]: E0715 23:57:51.553178 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93b5d49815c5ecd4cf8492f4c23f95122add8bb3445f1f94c651789464a0695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.556330 containerd[1896]: 
time="2025-07-15T23:57:51.556014212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9bgld,Uid:0c936f3e-8905-404f-b25f-a78dbd4fc552,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08756a018d9d617107186ce683468538830568bde3b89890be8b6af5c8e954de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.557203 kubelet[3190]: E0715 23:57:51.557090 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93b5d49815c5ecd4cf8492f4c23f95122add8bb3445f1f94c651789464a0695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d5f69495f-dmf7c" Jul 15 23:57:51.557203 kubelet[3190]: E0715 23:57:51.557153 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93b5d49815c5ecd4cf8492f4c23f95122add8bb3445f1f94c651789464a0695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d5f69495f-dmf7c" Jul 15 23:57:51.557449 kubelet[3190]: E0715 23:57:51.557262 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"556f66ad6e10a54a342e0e334fcc6baa50e09a0f5bb21aae0c042f950631d419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-94bf58f7b-mpkpj" Jul 15 23:57:51.557449 kubelet[3190]: E0715 23:57:51.557293 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"556f66ad6e10a54a342e0e334fcc6baa50e09a0f5bb21aae0c042f950631d419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94bf58f7b-mpkpj" Jul 15 23:57:51.559587 kubelet[3190]: E0715 23:57:51.559523 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d5f69495f-dmf7c_calico-system(20795f98-3bfb-43fb-afd0-3a7080cfb21d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d5f69495f-dmf7c_calico-system(20795f98-3bfb-43fb-afd0-3a7080cfb21d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c93b5d49815c5ecd4cf8492f4c23f95122add8bb3445f1f94c651789464a0695\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d5f69495f-dmf7c" podUID="20795f98-3bfb-43fb-afd0-3a7080cfb21d" Jul 15 23:57:51.559973 kubelet[3190]: E0715 23:57:51.559929 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94bf58f7b-mpkpj_calico-apiserver(642af4f8-45e2-4a1f-a15e-a31e1bedc462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94bf58f7b-mpkpj_calico-apiserver(642af4f8-45e2-4a1f-a15e-a31e1bedc462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"556f66ad6e10a54a342e0e334fcc6baa50e09a0f5bb21aae0c042f950631d419\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94bf58f7b-mpkpj" podUID="642af4f8-45e2-4a1f-a15e-a31e1bedc462" Jul 15 23:57:51.560088 kubelet[3190]: E0715 23:57:51.560062 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08756a018d9d617107186ce683468538830568bde3b89890be8b6af5c8e954de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.560138 kubelet[3190]: E0715 23:57:51.560105 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08756a018d9d617107186ce683468538830568bde3b89890be8b6af5c8e954de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9bgld" Jul 15 23:57:51.560138 kubelet[3190]: E0715 23:57:51.560128 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08756a018d9d617107186ce683468538830568bde3b89890be8b6af5c8e954de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9bgld" Jul 15 23:57:51.560475 kubelet[3190]: E0715 23:57:51.560182 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-9bgld_calico-system(0c936f3e-8905-404f-b25f-a78dbd4fc552)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"goldmane-768f4c5c69-9bgld_calico-system(0c936f3e-8905-404f-b25f-a78dbd4fc552)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08756a018d9d617107186ce683468538830568bde3b89890be8b6af5c8e954de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-9bgld" podUID="0c936f3e-8905-404f-b25f-a78dbd4fc552" Jul 15 23:57:51.587510 containerd[1896]: time="2025-07-15T23:57:51.587397606Z" level=error msg="Failed to destroy network for sandbox \"5863ed3c0fcab09b65650b4b6dcc35e08beaad0363cf18fe2ac79f9b74d4f76e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.590797 containerd[1896]: time="2025-07-15T23:57:51.590746403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7w44v,Uid:78eab6a1-05c2-474a-ada8-8cc9bda3773a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5863ed3c0fcab09b65650b4b6dcc35e08beaad0363cf18fe2ac79f9b74d4f76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.592086 kubelet[3190]: E0715 23:57:51.591945 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5863ed3c0fcab09b65650b4b6dcc35e08beaad0363cf18fe2ac79f9b74d4f76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.592958 kubelet[3190]: E0715 23:57:51.592635 3190 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5863ed3c0fcab09b65650b4b6dcc35e08beaad0363cf18fe2ac79f9b74d4f76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7w44v" Jul 15 23:57:51.592958 kubelet[3190]: E0715 23:57:51.592669 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5863ed3c0fcab09b65650b4b6dcc35e08beaad0363cf18fe2ac79f9b74d4f76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7w44v" Jul 15 23:57:51.594198 kubelet[3190]: E0715 23:57:51.593368 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7w44v_kube-system(78eab6a1-05c2-474a-ada8-8cc9bda3773a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7w44v_kube-system(78eab6a1-05c2-474a-ada8-8cc9bda3773a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5863ed3c0fcab09b65650b4b6dcc35e08beaad0363cf18fe2ac79f9b74d4f76e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7w44v" podUID="78eab6a1-05c2-474a-ada8-8cc9bda3773a" Jul 15 23:57:51.617076 containerd[1896]: time="2025-07-15T23:57:51.616709692Z" level=error msg="Failed to destroy network for sandbox \"d4efd0207238afaf0d85c0d9e3a8ac1c87524c5d9152d83745cf442c7860828a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.619146 containerd[1896]: time="2025-07-15T23:57:51.619097694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sbczf,Uid:bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4efd0207238afaf0d85c0d9e3a8ac1c87524c5d9152d83745cf442c7860828a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.619485 kubelet[3190]: E0715 23:57:51.619387 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4efd0207238afaf0d85c0d9e3a8ac1c87524c5d9152d83745cf442c7860828a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.619485 kubelet[3190]: E0715 23:57:51.619460 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4efd0207238afaf0d85c0d9e3a8ac1c87524c5d9152d83745cf442c7860828a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sbczf" Jul 15 23:57:51.619669 kubelet[3190]: E0715 23:57:51.619487 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4efd0207238afaf0d85c0d9e3a8ac1c87524c5d9152d83745cf442c7860828a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sbczf" Jul 15 23:57:51.619669 kubelet[3190]: E0715 23:57:51.619551 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sbczf_kube-system(bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sbczf_kube-system(bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4efd0207238afaf0d85c0d9e3a8ac1c87524c5d9152d83745cf442c7860828a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sbczf" podUID="bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5" Jul 15 23:57:51.641298 containerd[1896]: time="2025-07-15T23:57:51.641195819Z" level=error msg="Failed to destroy network for sandbox \"e3a556d18d650f43e7c634003462762a5c1548dd00ac06bc486343a7440a1049\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.643756 containerd[1896]: time="2025-07-15T23:57:51.643687606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94bf58f7b-9lt55,Uid:8d429579-dd7f-48d0-a55a-81a1c29e3add,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3a556d18d650f43e7c634003462762a5c1548dd00ac06bc486343a7440a1049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.644549 kubelet[3190]: E0715 23:57:51.643944 3190 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3a556d18d650f43e7c634003462762a5c1548dd00ac06bc486343a7440a1049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.644549 kubelet[3190]: E0715 23:57:51.643997 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3a556d18d650f43e7c634003462762a5c1548dd00ac06bc486343a7440a1049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94bf58f7b-9lt55" Jul 15 23:57:51.644549 kubelet[3190]: E0715 23:57:51.644014 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3a556d18d650f43e7c634003462762a5c1548dd00ac06bc486343a7440a1049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94bf58f7b-9lt55" Jul 15 23:57:51.644716 kubelet[3190]: E0715 23:57:51.644073 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94bf58f7b-9lt55_calico-apiserver(8d429579-dd7f-48d0-a55a-81a1c29e3add)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94bf58f7b-9lt55_calico-apiserver(8d429579-dd7f-48d0-a55a-81a1c29e3add)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3a556d18d650f43e7c634003462762a5c1548dd00ac06bc486343a7440a1049\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94bf58f7b-9lt55" podUID="8d429579-dd7f-48d0-a55a-81a1c29e3add" Jul 15 23:57:51.648593 containerd[1896]: time="2025-07-15T23:57:51.648206128Z" level=error msg="Failed to destroy network for sandbox \"1c2f72e9674e4dbd2cce534580ece1cc2ec806e3ba5f2797d49c723344510b0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.650623 containerd[1896]: time="2025-07-15T23:57:51.650573072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5b98f47d-fd7kf,Uid:04eac452-7bdb-4f3a-9b64-fb865c6416b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2f72e9674e4dbd2cce534580ece1cc2ec806e3ba5f2797d49c723344510b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.650954 kubelet[3190]: E0715 23:57:51.650903 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2f72e9674e4dbd2cce534580ece1cc2ec806e3ba5f2797d49c723344510b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:51.651053 kubelet[3190]: E0715 23:57:51.650965 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2f72e9674e4dbd2cce534580ece1cc2ec806e3ba5f2797d49c723344510b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/whisker-7b5b98f47d-fd7kf" Jul 15 23:57:51.651053 kubelet[3190]: E0715 23:57:51.650987 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2f72e9674e4dbd2cce534580ece1cc2ec806e3ba5f2797d49c723344510b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b5b98f47d-fd7kf" Jul 15 23:57:51.651053 kubelet[3190]: E0715 23:57:51.651032 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b5b98f47d-fd7kf_calico-system(04eac452-7bdb-4f3a-9b64-fb865c6416b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b5b98f47d-fd7kf_calico-system(04eac452-7bdb-4f3a-9b64-fb865c6416b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c2f72e9674e4dbd2cce534580ece1cc2ec806e3ba5f2797d49c723344510b0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b5b98f47d-fd7kf" podUID="04eac452-7bdb-4f3a-9b64-fb865c6416b6" Jul 15 23:57:52.412642 systemd[1]: Created slice kubepods-besteffort-pod128dc328_f476_4e10_9f21_e490b2e597ac.slice - libcontainer container kubepods-besteffort-pod128dc328_f476_4e10_9f21_e490b2e597ac.slice. 
Jul 15 23:57:52.431051 containerd[1896]: time="2025-07-15T23:57:52.430773356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5b8vh,Uid:128dc328-f476-4e10-9f21-e490b2e597ac,Namespace:calico-system,Attempt:0,}" Jul 15 23:57:52.570835 containerd[1896]: time="2025-07-15T23:57:52.568967110Z" level=error msg="Failed to destroy network for sandbox \"f15702898d08cf2e5e81eeedd27a73af560b6cd0c72a588c7ba28beb073a6967\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:52.574509 containerd[1896]: time="2025-07-15T23:57:52.574458109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5b8vh,Uid:128dc328-f476-4e10-9f21-e490b2e597ac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15702898d08cf2e5e81eeedd27a73af560b6cd0c72a588c7ba28beb073a6967\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:52.574854 systemd[1]: run-netns-cni\x2da2457d4e\x2d9f56\x2df17f\x2d3a90\x2dffebed7dad2b.mount: Deactivated successfully. 
Jul 15 23:57:52.577254 kubelet[3190]: E0715 23:57:52.576045 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15702898d08cf2e5e81eeedd27a73af560b6cd0c72a588c7ba28beb073a6967\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:57:52.577254 kubelet[3190]: E0715 23:57:52.576153 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15702898d08cf2e5e81eeedd27a73af560b6cd0c72a588c7ba28beb073a6967\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5b8vh" Jul 15 23:57:52.577254 kubelet[3190]: E0715 23:57:52.577029 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15702898d08cf2e5e81eeedd27a73af560b6cd0c72a588c7ba28beb073a6967\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5b8vh" Jul 15 23:57:52.577778 kubelet[3190]: E0715 23:57:52.577397 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5b8vh_calico-system(128dc328-f476-4e10-9f21-e490b2e597ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5b8vh_calico-system(128dc328-f476-4e10-9f21-e490b2e597ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f15702898d08cf2e5e81eeedd27a73af560b6cd0c72a588c7ba28beb073a6967\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5b8vh" podUID="128dc328-f476-4e10-9f21-e490b2e597ac" Jul 15 23:57:57.322279 kubelet[3190]: I0715 23:57:57.322222 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:57:57.394978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320597206.mount: Deactivated successfully. Jul 15 23:57:57.475185 containerd[1896]: time="2025-07-15T23:57:57.474927909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:57.507275 containerd[1896]: time="2025-07-15T23:57:57.507208973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 23:57:57.523063 containerd[1896]: time="2025-07-15T23:57:57.523012815Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:57.528597 containerd[1896]: time="2025-07-15T23:57:57.528549554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:57:57.530764 containerd[1896]: time="2025-07-15T23:57:57.530721880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.966800409s" Jul 15 23:57:57.530933 containerd[1896]: time="2025-07-15T23:57:57.530914115Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 23:57:57.579802 containerd[1896]: time="2025-07-15T23:57:57.579690286Z" level=info msg="CreateContainer within sandbox \"b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 23:57:57.620826 containerd[1896]: time="2025-07-15T23:57:57.620553950Z" level=info msg="Container a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:57.623165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933924386.mount: Deactivated successfully. Jul 15 23:57:57.685753 containerd[1896]: time="2025-07-15T23:57:57.685702988Z" level=info msg="CreateContainer within sandbox \"b49e49f0586a6dbefe3b3c1920d75dc31ef786e27b27fe737052fb173a77d235\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\"" Jul 15 23:57:57.686631 containerd[1896]: time="2025-07-15T23:57:57.686484683Z" level=info msg="StartContainer for \"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\"" Jul 15 23:57:57.693770 containerd[1896]: time="2025-07-15T23:57:57.692509868Z" level=info msg="connecting to shim a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb" address="unix:///run/containerd/s/c730d4a28cebcb06b6f4c06bdfef206d77c9a2caa69ac7fc73ab32c5b0c42e11" protocol=ttrpc version=3 Jul 15 23:57:57.843734 systemd[1]: Started cri-containerd-a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb.scope - libcontainer container a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb. 
Jul 15 23:57:57.918877 containerd[1896]: time="2025-07-15T23:57:57.918833629Z" level=info msg="StartContainer for \"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\" returns successfully" Jul 15 23:57:58.566800 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 23:57:58.569089 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 23:57:58.925591 kubelet[3190]: I0715 23:57:58.923151 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hnrqm" podStartSLOduration=2.464921649 podStartE2EDuration="18.923109257s" podCreationTimestamp="2025-07-15 23:57:40 +0000 UTC" firstStartedPulling="2025-07-15 23:57:41.07399127 +0000 UTC m=+20.923266741" lastFinishedPulling="2025-07-15 23:57:57.53217887 +0000 UTC m=+37.381454349" observedRunningTime="2025-07-15 23:57:58.668333022 +0000 UTC m=+38.517608500" watchObservedRunningTime="2025-07-15 23:57:58.923109257 +0000 UTC m=+38.772384736" Jul 15 23:57:59.035530 kubelet[3190]: I0715 23:57:59.034019 3190 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04eac452-7bdb-4f3a-9b64-fb865c6416b6-whisker-ca-bundle\") pod \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\" (UID: \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\") " Jul 15 23:57:59.035530 kubelet[3190]: I0715 23:57:59.034081 3190 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpz9z\" (UniqueName: \"kubernetes.io/projected/04eac452-7bdb-4f3a-9b64-fb865c6416b6-kube-api-access-cpz9z\") pod \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\" (UID: \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\") " Jul 15 23:57:59.039245 kubelet[3190]: I0715 23:57:59.039138 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04eac452-7bdb-4f3a-9b64-fb865c6416b6-whisker-ca-bundle" (OuterVolumeSpecName: 
"whisker-ca-bundle") pod "04eac452-7bdb-4f3a-9b64-fb865c6416b6" (UID: "04eac452-7bdb-4f3a-9b64-fb865c6416b6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:57:59.039245 kubelet[3190]: I0715 23:57:59.039248 3190 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04eac452-7bdb-4f3a-9b64-fb865c6416b6-whisker-backend-key-pair\") pod \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\" (UID: \"04eac452-7bdb-4f3a-9b64-fb865c6416b6\") " Jul 15 23:57:59.039429 kubelet[3190]: I0715 23:57:59.039355 3190 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04eac452-7bdb-4f3a-9b64-fb865c6416b6-whisker-ca-bundle\") on node \"ip-172-31-28-113\" DevicePath \"\"" Jul 15 23:57:59.050032 systemd[1]: var-lib-kubelet-pods-04eac452\x2d7bdb\x2d4f3a\x2d9b64\x2dfb865c6416b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcpz9z.mount: Deactivated successfully. Jul 15 23:57:59.051355 kubelet[3190]: I0715 23:57:59.051304 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04eac452-7bdb-4f3a-9b64-fb865c6416b6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "04eac452-7bdb-4f3a-9b64-fb865c6416b6" (UID: "04eac452-7bdb-4f3a-9b64-fb865c6416b6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:57:59.054255 kubelet[3190]: I0715 23:57:59.052422 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04eac452-7bdb-4f3a-9b64-fb865c6416b6-kube-api-access-cpz9z" (OuterVolumeSpecName: "kube-api-access-cpz9z") pod "04eac452-7bdb-4f3a-9b64-fb865c6416b6" (UID: "04eac452-7bdb-4f3a-9b64-fb865c6416b6"). InnerVolumeSpecName "kube-api-access-cpz9z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:57:59.056973 systemd[1]: var-lib-kubelet-pods-04eac452\x2d7bdb\x2d4f3a\x2d9b64\x2dfb865c6416b6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 23:57:59.131043 containerd[1896]: time="2025-07-15T23:57:59.130989332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\" id:\"c416081bd1232d777aee9740055951105b2b7778dd6746fe5ac922fe50ffa009\" pid:4434 exit_status:1 exited_at:{seconds:1752623879 nanos:127977193}" Jul 15 23:57:59.141076 kubelet[3190]: I0715 23:57:59.141001 3190 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cpz9z\" (UniqueName: \"kubernetes.io/projected/04eac452-7bdb-4f3a-9b64-fb865c6416b6-kube-api-access-cpz9z\") on node \"ip-172-31-28-113\" DevicePath \"\"" Jul 15 23:57:59.141076 kubelet[3190]: I0715 23:57:59.141045 3190 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04eac452-7bdb-4f3a-9b64-fb865c6416b6-whisker-backend-key-pair\") on node \"ip-172-31-28-113\" DevicePath \"\"" Jul 15 23:57:59.644741 systemd[1]: Removed slice kubepods-besteffort-pod04eac452_7bdb_4f3a_9b64_fb865c6416b6.slice - libcontainer container kubepods-besteffort-pod04eac452_7bdb_4f3a_9b64_fb865c6416b6.slice. Jul 15 23:57:59.779021 systemd[1]: Created slice kubepods-besteffort-pod85b1df60_2134_44d9_afde_da8bd65b0ea9.slice - libcontainer container kubepods-besteffort-pod85b1df60_2134_44d9_afde_da8bd65b0ea9.slice. 
Jul 15 23:57:59.846164 kubelet[3190]: I0715 23:57:59.846117 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b1df60-2134-44d9-afde-da8bd65b0ea9-whisker-ca-bundle\") pod \"whisker-6b7c7ccbcd-vh2c7\" (UID: \"85b1df60-2134-44d9-afde-da8bd65b0ea9\") " pod="calico-system/whisker-6b7c7ccbcd-vh2c7" Jul 15 23:57:59.846350 kubelet[3190]: I0715 23:57:59.846177 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85b1df60-2134-44d9-afde-da8bd65b0ea9-whisker-backend-key-pair\") pod \"whisker-6b7c7ccbcd-vh2c7\" (UID: \"85b1df60-2134-44d9-afde-da8bd65b0ea9\") " pod="calico-system/whisker-6b7c7ccbcd-vh2c7" Jul 15 23:57:59.846350 kubelet[3190]: I0715 23:57:59.846199 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8lvr\" (UniqueName: \"kubernetes.io/projected/85b1df60-2134-44d9-afde-da8bd65b0ea9-kube-api-access-v8lvr\") pod \"whisker-6b7c7ccbcd-vh2c7\" (UID: \"85b1df60-2134-44d9-afde-da8bd65b0ea9\") " pod="calico-system/whisker-6b7c7ccbcd-vh2c7" Jul 15 23:57:59.928828 containerd[1896]: time="2025-07-15T23:57:59.928692096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\" id:\"e59960f36c3457c4b231262dcb57292445d755bbd80e85e9cdf0931766a85706\" pid:4480 exit_status:1 exited_at:{seconds:1752623879 nanos:928403954}" Jul 15 23:58:00.087487 containerd[1896]: time="2025-07-15T23:58:00.087381061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b7c7ccbcd-vh2c7,Uid:85b1df60-2134-44d9-afde-da8bd65b0ea9,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:00.310951 kubelet[3190]: I0715 23:58:00.310902 3190 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="04eac452-7bdb-4f3a-9b64-fb865c6416b6" path="/var/lib/kubelet/pods/04eac452-7bdb-4f3a-9b64-fb865c6416b6/volumes" Jul 15 23:58:00.753656 systemd-networkd[1813]: calib0376717131: Link UP Jul 15 23:58:00.754347 (udev-worker)[4416]: Network interface NamePolicy= disabled on kernel command line. Jul 15 23:58:00.754931 systemd-networkd[1813]: calib0376717131: Gained carrier Jul 15 23:58:00.818822 containerd[1896]: 2025-07-15 23:58:00.146 [INFO][4499] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:58:00.818822 containerd[1896]: 2025-07-15 23:58:00.210 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0 whisker-6b7c7ccbcd- calico-system 85b1df60-2134-44d9-afde-da8bd65b0ea9 921 0 2025-07-15 23:57:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b7c7ccbcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-113 whisker-6b7c7ccbcd-vh2c7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib0376717131 [] [] }} ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Namespace="calico-system" Pod="whisker-6b7c7ccbcd-vh2c7" WorkloadEndpoint="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-" Jul 15 23:58:00.818822 containerd[1896]: 2025-07-15 23:58:00.210 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Namespace="calico-system" Pod="whisker-6b7c7ccbcd-vh2c7" WorkloadEndpoint="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" Jul 15 23:58:00.818822 containerd[1896]: 2025-07-15 23:58:00.608 [INFO][4507] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" HandleID="k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Workload="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.617 [INFO][4507] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" HandleID="k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Workload="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000398390), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-113", "pod":"whisker-6b7c7ccbcd-vh2c7", "timestamp":"2025-07-15 23:58:00.608026561 +0000 UTC"}, Hostname:"ip-172-31-28-113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.617 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.617 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.622 [INFO][4507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-113' Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.645 [INFO][4507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" host="ip-172-31-28-113" Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.665 [INFO][4507] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-113" Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.678 [INFO][4507] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.688 [INFO][4507] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:00.819624 containerd[1896]: 2025-07-15 23:58:00.695 [INFO][4507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:00.821199 containerd[1896]: 2025-07-15 23:58:00.695 [INFO][4507] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" host="ip-172-31-28-113" Jul 15 23:58:00.821199 containerd[1896]: 2025-07-15 23:58:00.699 [INFO][4507] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3 Jul 15 23:58:00.821199 containerd[1896]: 2025-07-15 23:58:00.711 [INFO][4507] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" host="ip-172-31-28-113" Jul 15 23:58:00.821199 containerd[1896]: 2025-07-15 23:58:00.720 [INFO][4507] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.129/26] block=192.168.29.128/26 
handle="k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" host="ip-172-31-28-113" Jul 15 23:58:00.821199 containerd[1896]: 2025-07-15 23:58:00.720 [INFO][4507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.129/26] handle="k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" host="ip-172-31-28-113" Jul 15 23:58:00.821199 containerd[1896]: 2025-07-15 23:58:00.721 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:00.821199 containerd[1896]: 2025-07-15 23:58:00.721 [INFO][4507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.129/26] IPv6=[] ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" HandleID="k8s-pod-network.d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Workload="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" Jul 15 23:58:00.821654 containerd[1896]: 2025-07-15 23:58:00.729 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Namespace="calico-system" Pod="whisker-6b7c7ccbcd-vh2c7" WorkloadEndpoint="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0", GenerateName:"whisker-6b7c7ccbcd-", Namespace:"calico-system", SelfLink:"", UID:"85b1df60-2134-44d9-afde-da8bd65b0ea9", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b7c7ccbcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"", Pod:"whisker-6b7c7ccbcd-vh2c7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0376717131", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:00.821654 containerd[1896]: 2025-07-15 23:58:00.729 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.129/32] ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Namespace="calico-system" Pod="whisker-6b7c7ccbcd-vh2c7" WorkloadEndpoint="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" Jul 15 23:58:00.821838 containerd[1896]: 2025-07-15 23:58:00.729 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0376717131 ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Namespace="calico-system" Pod="whisker-6b7c7ccbcd-vh2c7" WorkloadEndpoint="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" Jul 15 23:58:00.821838 containerd[1896]: 2025-07-15 23:58:00.749 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Namespace="calico-system" Pod="whisker-6b7c7ccbcd-vh2c7" WorkloadEndpoint="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" Jul 15 23:58:00.821941 containerd[1896]: 2025-07-15 23:58:00.750 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" 
Namespace="calico-system" Pod="whisker-6b7c7ccbcd-vh2c7" WorkloadEndpoint="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0", GenerateName:"whisker-6b7c7ccbcd-", Namespace:"calico-system", SelfLink:"", UID:"85b1df60-2134-44d9-afde-da8bd65b0ea9", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b7c7ccbcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3", Pod:"whisker-6b7c7ccbcd-vh2c7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0376717131", MAC:"d6:77:c8:a8:63:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:00.822028 containerd[1896]: 2025-07-15 23:58:00.781 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" Namespace="calico-system" Pod="whisker-6b7c7ccbcd-vh2c7" WorkloadEndpoint="ip--172--31--28--113-k8s-whisker--6b7c7ccbcd--vh2c7-eth0" Jul 15 23:58:01.188728 
containerd[1896]: time="2025-07-15T23:58:01.187354134Z" level=info msg="connecting to shim d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3" address="unix:///run/containerd/s/2a4d3bdb09568ec781619f7494a32b3664e4122e37b9136f6f4275e5458ffea6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:01.257081 systemd[1]: Started cri-containerd-d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3.scope - libcontainer container d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3. Jul 15 23:58:01.418643 containerd[1896]: time="2025-07-15T23:58:01.418597436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b7c7ccbcd-vh2c7,Uid:85b1df60-2134-44d9-afde-da8bd65b0ea9,Namespace:calico-system,Attempt:0,} returns sandbox id \"d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3\"" Jul 15 23:58:01.440384 containerd[1896]: time="2025-07-15T23:58:01.439437807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\" id:\"501c1a14dd4bd4f67a3bc7a2a5caa99ebfc27b7997376f1710c6b86bdfabb6f8\" pid:4611 exit_status:1 exited_at:{seconds:1752623881 nanos:429767304}" Jul 15 23:58:01.461407 containerd[1896]: time="2025-07-15T23:58:01.461357360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 23:58:01.864537 systemd-networkd[1813]: calib0376717131: Gained IPv6LL Jul 15 23:58:01.917437 systemd-networkd[1813]: vxlan.calico: Link UP Jul 15 23:58:01.917449 systemd-networkd[1813]: vxlan.calico: Gained carrier Jul 15 23:58:01.954621 (udev-worker)[4413]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 23:58:02.310524 containerd[1896]: time="2025-07-15T23:58:02.309910705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5f69495f-dmf7c,Uid:20795f98-3bfb-43fb-afd0-3a7080cfb21d,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:02.310524 containerd[1896]: time="2025-07-15T23:58:02.310328287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9bgld,Uid:0c936f3e-8905-404f-b25f-a78dbd4fc552,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:02.565255 systemd-networkd[1813]: calif66ba99dcd4: Link UP Jul 15 23:58:02.569492 systemd-networkd[1813]: calif66ba99dcd4: Gained carrier Jul 15 23:58:02.601755 containerd[1896]: 2025-07-15 23:58:02.409 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0 goldmane-768f4c5c69- calico-system 0c936f3e-8905-404f-b25f-a78dbd4fc552 849 0 2025-07-15 23:57:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-113 goldmane-768f4c5c69-9bgld eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif66ba99dcd4 [] [] }} ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Namespace="calico-system" Pod="goldmane-768f4c5c69-9bgld" WorkloadEndpoint="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-" Jul 15 23:58:02.601755 containerd[1896]: 2025-07-15 23:58:02.409 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Namespace="calico-system" Pod="goldmane-768f4c5c69-9bgld" WorkloadEndpoint="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" Jul 15 23:58:02.601755 containerd[1896]: 2025-07-15 23:58:02.475 [INFO][4807] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" HandleID="k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Workload="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.475 [INFO][4807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" HandleID="k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Workload="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-113", "pod":"goldmane-768f4c5c69-9bgld", "timestamp":"2025-07-15 23:58:02.47312453 +0000 UTC"}, Hostname:"ip-172-31-28-113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.475 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.475 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.475 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-113' Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.499 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" host="ip-172-31-28-113" Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.509 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-113" Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.521 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.527 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:02.601975 containerd[1896]: 2025-07-15 23:58:02.530 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:02.602212 containerd[1896]: 2025-07-15 23:58:02.530 [INFO][4807] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" host="ip-172-31-28-113" Jul 15 23:58:02.602212 containerd[1896]: 2025-07-15 23:58:02.533 [INFO][4807] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d Jul 15 23:58:02.602212 containerd[1896]: 2025-07-15 23:58:02.540 [INFO][4807] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" host="ip-172-31-28-113" Jul 15 23:58:02.602212 containerd[1896]: 2025-07-15 23:58:02.551 [INFO][4807] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.130/26] block=192.168.29.128/26 
handle="k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" host="ip-172-31-28-113" Jul 15 23:58:02.602212 containerd[1896]: 2025-07-15 23:58:02.551 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.130/26] handle="k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" host="ip-172-31-28-113" Jul 15 23:58:02.602212 containerd[1896]: 2025-07-15 23:58:02.551 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:02.602212 containerd[1896]: 2025-07-15 23:58:02.551 [INFO][4807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.130/26] IPv6=[] ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" HandleID="k8s-pod-network.1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Workload="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" Jul 15 23:58:02.604334 containerd[1896]: 2025-07-15 23:58:02.559 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Namespace="calico-system" Pod="goldmane-768f4c5c69-9bgld" WorkloadEndpoint="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0c936f3e-8905-404f-b25f-a78dbd4fc552", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"", Pod:"goldmane-768f4c5c69-9bgld", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif66ba99dcd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:02.604334 containerd[1896]: 2025-07-15 23:58:02.559 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.130/32] ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Namespace="calico-system" Pod="goldmane-768f4c5c69-9bgld" WorkloadEndpoint="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" Jul 15 23:58:02.604667 containerd[1896]: 2025-07-15 23:58:02.559 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif66ba99dcd4 ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Namespace="calico-system" Pod="goldmane-768f4c5c69-9bgld" WorkloadEndpoint="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" Jul 15 23:58:02.604667 containerd[1896]: 2025-07-15 23:58:02.571 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Namespace="calico-system" Pod="goldmane-768f4c5c69-9bgld" WorkloadEndpoint="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" Jul 15 23:58:02.604729 containerd[1896]: 2025-07-15 23:58:02.572 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Namespace="calico-system" Pod="goldmane-768f4c5c69-9bgld" WorkloadEndpoint="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0c936f3e-8905-404f-b25f-a78dbd4fc552", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d", Pod:"goldmane-768f4c5c69-9bgld", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif66ba99dcd4", MAC:"52:8d:48:69:05:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:02.604787 containerd[1896]: 2025-07-15 23:58:02.595 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" Namespace="calico-system" Pod="goldmane-768f4c5c69-9bgld" 
WorkloadEndpoint="ip--172--31--28--113-k8s-goldmane--768f4c5c69--9bgld-eth0" Jul 15 23:58:02.665684 containerd[1896]: time="2025-07-15T23:58:02.665612734Z" level=info msg="connecting to shim 1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d" address="unix:///run/containerd/s/d808b61ef0ea7d6447376196ce92ca0cf479d7e169457fc09c67cc4565a2dafc" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:02.726764 systemd[1]: Started cri-containerd-1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d.scope - libcontainer container 1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d. Jul 15 23:58:02.727871 systemd-networkd[1813]: calid7c4f9d2223: Link UP Jul 15 23:58:02.732216 systemd-networkd[1813]: calid7c4f9d2223: Gained carrier Jul 15 23:58:02.769806 containerd[1896]: 2025-07-15 23:58:02.417 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0 calico-kube-controllers-d5f69495f- calico-system 20795f98-3bfb-43fb-afd0-3a7080cfb21d 846 0 2025-07-15 23:57:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d5f69495f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-113 calico-kube-controllers-d5f69495f-dmf7c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid7c4f9d2223 [] [] }} ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Namespace="calico-system" Pod="calico-kube-controllers-d5f69495f-dmf7c" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-" Jul 15 23:58:02.769806 containerd[1896]: 2025-07-15 23:58:02.418 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Namespace="calico-system" Pod="calico-kube-controllers-d5f69495f-dmf7c" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" Jul 15 23:58:02.769806 containerd[1896]: 2025-07-15 23:58:02.497 [INFO][4812] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" HandleID="k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Workload="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.499 [INFO][4812] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" HandleID="k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Workload="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fa00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-113", "pod":"calico-kube-controllers-d5f69495f-dmf7c", "timestamp":"2025-07-15 23:58:02.496973136 +0000 UTC"}, Hostname:"ip-172-31-28-113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.499 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.551 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.552 [INFO][4812] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-113' Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.604 [INFO][4812] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" host="ip-172-31-28-113" Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.620 [INFO][4812] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-113" Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.642 [INFO][4812] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.649 [INFO][4812] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:02.770176 containerd[1896]: 2025-07-15 23:58:02.658 [INFO][4812] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:02.770576 containerd[1896]: 2025-07-15 23:58:02.659 [INFO][4812] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" host="ip-172-31-28-113" Jul 15 23:58:02.770576 containerd[1896]: 2025-07-15 23:58:02.663 [INFO][4812] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964 Jul 15 23:58:02.770576 containerd[1896]: 2025-07-15 23:58:02.680 [INFO][4812] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" host="ip-172-31-28-113" Jul 15 23:58:02.770576 containerd[1896]: 2025-07-15 23:58:02.706 [INFO][4812] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.131/26] block=192.168.29.128/26 
handle="k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" host="ip-172-31-28-113" Jul 15 23:58:02.770576 containerd[1896]: 2025-07-15 23:58:02.707 [INFO][4812] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.131/26] handle="k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" host="ip-172-31-28-113" Jul 15 23:58:02.770576 containerd[1896]: 2025-07-15 23:58:02.707 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:02.770576 containerd[1896]: 2025-07-15 23:58:02.707 [INFO][4812] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.131/26] IPv6=[] ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" HandleID="k8s-pod-network.4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Workload="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" Jul 15 23:58:02.773388 containerd[1896]: 2025-07-15 23:58:02.715 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Namespace="calico-system" Pod="calico-kube-controllers-d5f69495f-dmf7c" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0", GenerateName:"calico-kube-controllers-d5f69495f-", Namespace:"calico-system", SelfLink:"", UID:"20795f98-3bfb-43fb-afd0-3a7080cfb21d", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d5f69495f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"", Pod:"calico-kube-controllers-d5f69495f-dmf7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid7c4f9d2223", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:02.773505 containerd[1896]: 2025-07-15 23:58:02.716 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.131/32] ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Namespace="calico-system" Pod="calico-kube-controllers-d5f69495f-dmf7c" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" Jul 15 23:58:02.773505 containerd[1896]: 2025-07-15 23:58:02.716 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7c4f9d2223 ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Namespace="calico-system" Pod="calico-kube-controllers-d5f69495f-dmf7c" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" Jul 15 23:58:02.773505 containerd[1896]: 2025-07-15 23:58:02.734 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Namespace="calico-system" Pod="calico-kube-controllers-d5f69495f-dmf7c" 
WorkloadEndpoint="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" Jul 15 23:58:02.773641 containerd[1896]: 2025-07-15 23:58:02.741 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Namespace="calico-system" Pod="calico-kube-controllers-d5f69495f-dmf7c" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0", GenerateName:"calico-kube-controllers-d5f69495f-", Namespace:"calico-system", SelfLink:"", UID:"20795f98-3bfb-43fb-afd0-3a7080cfb21d", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d5f69495f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964", Pod:"calico-kube-controllers-d5f69495f-dmf7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid7c4f9d2223", MAC:"0e:86:de:3d:95:d3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:02.773738 containerd[1896]: 2025-07-15 23:58:02.757 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" Namespace="calico-system" Pod="calico-kube-controllers-d5f69495f-dmf7c" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--kube--controllers--d5f69495f--dmf7c-eth0" Jul 15 23:58:02.879351 containerd[1896]: time="2025-07-15T23:58:02.878750323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9bgld,Uid:0c936f3e-8905-404f-b25f-a78dbd4fc552,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d\"" Jul 15 23:58:02.897349 containerd[1896]: time="2025-07-15T23:58:02.897280629Z" level=info msg="connecting to shim 4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964" address="unix:///run/containerd/s/97a7a53e26b473c15b8c39bd67d06afc3e90ead10764fffc4358403912b58d56" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:02.945708 systemd[1]: Started cri-containerd-4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964.scope - libcontainer container 4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964. 
Jul 15 23:58:03.015465 containerd[1896]: time="2025-07-15T23:58:03.015374579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:03.018086 containerd[1896]: time="2025-07-15T23:58:03.018037141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 23:58:03.020699 containerd[1896]: time="2025-07-15T23:58:03.020656786Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:03.025308 containerd[1896]: time="2025-07-15T23:58:03.025066049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:03.026085 containerd[1896]: time="2025-07-15T23:58:03.026044605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.564627207s" Jul 15 23:58:03.026184 containerd[1896]: time="2025-07-15T23:58:03.026088465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 23:58:03.032100 containerd[1896]: time="2025-07-15T23:58:03.031681693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5f69495f-dmf7c,Uid:20795f98-3bfb-43fb-afd0-3a7080cfb21d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964\"" Jul 
15 23:58:03.045476 containerd[1896]: time="2025-07-15T23:58:03.045433249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 23:58:03.076868 containerd[1896]: time="2025-07-15T23:58:03.076824737Z" level=info msg="CreateContainer within sandbox \"d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 23:58:03.090261 containerd[1896]: time="2025-07-15T23:58:03.090166635Z" level=info msg="Container 224c647cf1356ee9338c3bda74938a99e2ecf1f26e522227ee36c3a6350594bf: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:03.103478 containerd[1896]: time="2025-07-15T23:58:03.103432763Z" level=info msg="CreateContainer within sandbox \"d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"224c647cf1356ee9338c3bda74938a99e2ecf1f26e522227ee36c3a6350594bf\"" Jul 15 23:58:03.104265 containerd[1896]: time="2025-07-15T23:58:03.104141584Z" level=info msg="StartContainer for \"224c647cf1356ee9338c3bda74938a99e2ecf1f26e522227ee36c3a6350594bf\"" Jul 15 23:58:03.106041 containerd[1896]: time="2025-07-15T23:58:03.105827978Z" level=info msg="connecting to shim 224c647cf1356ee9338c3bda74938a99e2ecf1f26e522227ee36c3a6350594bf" address="unix:///run/containerd/s/2a4d3bdb09568ec781619f7494a32b3664e4122e37b9136f6f4275e5458ffea6" protocol=ttrpc version=3 Jul 15 23:58:03.129632 systemd[1]: Started cri-containerd-224c647cf1356ee9338c3bda74938a99e2ecf1f26e522227ee36c3a6350594bf.scope - libcontainer container 224c647cf1356ee9338c3bda74938a99e2ecf1f26e522227ee36c3a6350594bf. 
Jul 15 23:58:03.191729 containerd[1896]: time="2025-07-15T23:58:03.191679729Z" level=info msg="StartContainer for \"224c647cf1356ee9338c3bda74938a99e2ecf1f26e522227ee36c3a6350594bf\" returns successfully" Jul 15 23:58:03.308645 containerd[1896]: time="2025-07-15T23:58:03.308395617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94bf58f7b-9lt55,Uid:8d429579-dd7f-48d0-a55a-81a1c29e3add,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:58:03.308645 containerd[1896]: time="2025-07-15T23:58:03.308590733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5b8vh,Uid:128dc328-f476-4e10-9f21-e490b2e597ac,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:03.488509 systemd-networkd[1813]: calic374937ce2e: Link UP Jul 15 23:58:03.489490 systemd-networkd[1813]: calic374937ce2e: Gained carrier Jul 15 23:58:03.511075 containerd[1896]: 2025-07-15 23:58:03.376 [INFO][4977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0 csi-node-driver- calico-system 128dc328-f476-4e10-9f21-e490b2e597ac 736 0 2025-07-15 23:57:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-113 csi-node-driver-5b8vh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic374937ce2e [] [] }} ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Namespace="calico-system" Pod="csi-node-driver-5b8vh" WorkloadEndpoint="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-" Jul 15 23:58:03.511075 containerd[1896]: 2025-07-15 23:58:03.376 [INFO][4977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Namespace="calico-system" Pod="csi-node-driver-5b8vh" WorkloadEndpoint="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" Jul 15 23:58:03.511075 containerd[1896]: 2025-07-15 23:58:03.424 [INFO][4996] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" HandleID="k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Workload="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.424 [INFO][4996] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" HandleID="k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Workload="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-113", "pod":"csi-node-driver-5b8vh", "timestamp":"2025-07-15 23:58:03.424135515 +0000 UTC"}, Hostname:"ip-172-31-28-113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.424 [INFO][4996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.424 [INFO][4996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.424 [INFO][4996] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-113' Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.440 [INFO][4996] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" host="ip-172-31-28-113" Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.447 [INFO][4996] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-113" Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.456 [INFO][4996] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.459 [INFO][4996] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:03.512157 containerd[1896]: 2025-07-15 23:58:03.461 [INFO][4996] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:03.513972 containerd[1896]: 2025-07-15 23:58:03.462 [INFO][4996] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" host="ip-172-31-28-113" Jul 15 23:58:03.513972 containerd[1896]: 2025-07-15 23:58:03.463 [INFO][4996] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470 Jul 15 23:58:03.513972 containerd[1896]: 2025-07-15 23:58:03.468 [INFO][4996] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" host="ip-172-31-28-113" Jul 15 23:58:03.513972 containerd[1896]: 2025-07-15 23:58:03.478 [INFO][4996] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.132/26] block=192.168.29.128/26 
handle="k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" host="ip-172-31-28-113" Jul 15 23:58:03.513972 containerd[1896]: 2025-07-15 23:58:03.479 [INFO][4996] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.132/26] handle="k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" host="ip-172-31-28-113" Jul 15 23:58:03.513972 containerd[1896]: 2025-07-15 23:58:03.479 [INFO][4996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:03.513972 containerd[1896]: 2025-07-15 23:58:03.479 [INFO][4996] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.132/26] IPv6=[] ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" HandleID="k8s-pod-network.d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Workload="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" Jul 15 23:58:03.514145 containerd[1896]: 2025-07-15 23:58:03.483 [INFO][4977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Namespace="calico-system" Pod="csi-node-driver-5b8vh" WorkloadEndpoint="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"128dc328-f476-4e10-9f21-e490b2e597ac", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"", Pod:"csi-node-driver-5b8vh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic374937ce2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:03.514224 containerd[1896]: 2025-07-15 23:58:03.483 [INFO][4977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.132/32] ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Namespace="calico-system" Pod="csi-node-driver-5b8vh" WorkloadEndpoint="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" Jul 15 23:58:03.514224 containerd[1896]: 2025-07-15 23:58:03.483 [INFO][4977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic374937ce2e ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Namespace="calico-system" Pod="csi-node-driver-5b8vh" WorkloadEndpoint="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" Jul 15 23:58:03.514224 containerd[1896]: 2025-07-15 23:58:03.485 [INFO][4977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Namespace="calico-system" Pod="csi-node-driver-5b8vh" WorkloadEndpoint="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" Jul 15 23:58:03.514815 containerd[1896]: 2025-07-15 23:58:03.486 [INFO][4977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Namespace="calico-system" Pod="csi-node-driver-5b8vh" WorkloadEndpoint="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"128dc328-f476-4e10-9f21-e490b2e597ac", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470", Pod:"csi-node-driver-5b8vh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic374937ce2e", MAC:"ae:fd:df:87:24:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:03.514885 containerd[1896]: 2025-07-15 23:58:03.508 [INFO][4977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" Namespace="calico-system" Pod="csi-node-driver-5b8vh" WorkloadEndpoint="ip--172--31--28--113-k8s-csi--node--driver--5b8vh-eth0" Jul 15 23:58:03.552219 containerd[1896]: time="2025-07-15T23:58:03.552169564Z" level=info msg="connecting to shim d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470" address="unix:///run/containerd/s/2e7dd289ed55024b963a649d8cc02e1d0e3ee7d4e683ce3c382c9846ccf01211" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:03.592583 systemd-networkd[1813]: calif66ba99dcd4: Gained IPv6LL Jul 15 23:58:03.594187 systemd[1]: Started cri-containerd-d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470.scope - libcontainer container d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470. Jul 15 23:58:03.623347 systemd-networkd[1813]: cali1152c7982b3: Link UP Jul 15 23:58:03.626619 systemd-networkd[1813]: cali1152c7982b3: Gained carrier Jul 15 23:58:03.661383 containerd[1896]: 2025-07-15 23:58:03.367 [INFO][4967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0 calico-apiserver-94bf58f7b- calico-apiserver 8d429579-dd7f-48d0-a55a-81a1c29e3add 845 0 2025-07-15 23:57:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94bf58f7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-113 calico-apiserver-94bf58f7b-9lt55 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1152c7982b3 [] [] }} ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-9lt55" 
WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-" Jul 15 23:58:03.661383 containerd[1896]: 2025-07-15 23:58:03.367 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-9lt55" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" Jul 15 23:58:03.661383 containerd[1896]: 2025-07-15 23:58:03.439 [INFO][4991] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" HandleID="k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Workload="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.440 [INFO][4991] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" HandleID="k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Workload="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-113", "pod":"calico-apiserver-94bf58f7b-9lt55", "timestamp":"2025-07-15 23:58:03.439315333 +0000 UTC"}, Hostname:"ip-172-31-28-113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.440 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.479 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.479 [INFO][4991] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-113' Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.541 [INFO][4991] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" host="ip-172-31-28-113" Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.550 [INFO][4991] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-113" Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.559 [INFO][4991] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.568 [INFO][4991] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:03.661779 containerd[1896]: 2025-07-15 23:58:03.577 [INFO][4991] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:03.662096 containerd[1896]: 2025-07-15 23:58:03.577 [INFO][4991] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" host="ip-172-31-28-113" Jul 15 23:58:03.662096 containerd[1896]: 2025-07-15 23:58:03.583 [INFO][4991] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e Jul 15 23:58:03.662096 containerd[1896]: 2025-07-15 23:58:03.596 [INFO][4991] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" host="ip-172-31-28-113" Jul 15 23:58:03.662096 containerd[1896]: 2025-07-15 23:58:03.611 [INFO][4991] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.133/26] block=192.168.29.128/26 
handle="k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" host="ip-172-31-28-113" Jul 15 23:58:03.662096 containerd[1896]: 2025-07-15 23:58:03.612 [INFO][4991] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.133/26] handle="k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" host="ip-172-31-28-113" Jul 15 23:58:03.662096 containerd[1896]: 2025-07-15 23:58:03.612 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:03.662096 containerd[1896]: 2025-07-15 23:58:03.612 [INFO][4991] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.133/26] IPv6=[] ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" HandleID="k8s-pod-network.060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Workload="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" Jul 15 23:58:03.663086 containerd[1896]: 2025-07-15 23:58:03.616 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-9lt55" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0", GenerateName:"calico-apiserver-94bf58f7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d429579-dd7f-48d0-a55a-81a1c29e3add", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94bf58f7b", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"", Pod:"calico-apiserver-94bf58f7b-9lt55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1152c7982b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:03.663205 containerd[1896]: 2025-07-15 23:58:03.616 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.133/32] ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-9lt55" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" Jul 15 23:58:03.663205 containerd[1896]: 2025-07-15 23:58:03.616 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1152c7982b3 ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-9lt55" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" Jul 15 23:58:03.663205 containerd[1896]: 2025-07-15 23:58:03.628 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-9lt55" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" Jul 15 23:58:03.663362 
containerd[1896]: 2025-07-15 23:58:03.629 [INFO][4967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-9lt55" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0", GenerateName:"calico-apiserver-94bf58f7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d429579-dd7f-48d0-a55a-81a1c29e3add", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94bf58f7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e", Pod:"calico-apiserver-94bf58f7b-9lt55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1152c7982b3", MAC:"22:a6:5e:b3:7e:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:03.663470 containerd[1896]: 
2025-07-15 23:58:03.647 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-9lt55" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--9lt55-eth0" Jul 15 23:58:03.691176 containerd[1896]: time="2025-07-15T23:58:03.691042171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5b8vh,Uid:128dc328-f476-4e10-9f21-e490b2e597ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470\"" Jul 15 23:58:03.734469 containerd[1896]: time="2025-07-15T23:58:03.734422001Z" level=info msg="connecting to shim 060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e" address="unix:///run/containerd/s/e13d07f48f22eff53092a4a3eb9daecdb0a6edde83b2d08ba5264aadfa952795" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:03.770464 systemd[1]: Started cri-containerd-060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e.scope - libcontainer container 060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e. 
Jul 15 23:58:03.833051 containerd[1896]: time="2025-07-15T23:58:03.833010099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94bf58f7b-9lt55,Uid:8d429579-dd7f-48d0-a55a-81a1c29e3add,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e\"" Jul 15 23:58:03.912710 systemd-networkd[1813]: vxlan.calico: Gained IPv6LL Jul 15 23:58:03.976399 systemd-networkd[1813]: calid7c4f9d2223: Gained IPv6LL Jul 15 23:58:04.316690 containerd[1896]: time="2025-07-15T23:58:04.316072804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sbczf,Uid:bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:04.577519 systemd-networkd[1813]: cali048a1675e96: Link UP Jul 15 23:58:04.587206 systemd-networkd[1813]: cali048a1675e96: Gained carrier Jul 15 23:58:04.640791 containerd[1896]: 2025-07-15 23:58:04.406 [INFO][5114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0 coredns-674b8bbfcf- kube-system bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5 841 0 2025-07-15 23:57:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-113 coredns-674b8bbfcf-sbczf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali048a1675e96 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbczf" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-" Jul 15 23:58:04.640791 containerd[1896]: 2025-07-15 23:58:04.407 [INFO][5114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbczf" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" Jul 15 23:58:04.640791 containerd[1896]: 2025-07-15 23:58:04.453 [INFO][5128] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" HandleID="k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Workload="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.454 [INFO][5128] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" HandleID="k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Workload="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5930), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-113", "pod":"coredns-674b8bbfcf-sbczf", "timestamp":"2025-07-15 23:58:04.45376467 +0000 UTC"}, Hostname:"ip-172-31-28-113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.454 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.454 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.454 [INFO][5128] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-113' Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.473 [INFO][5128] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" host="ip-172-31-28-113" Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.493 [INFO][5128] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-113" Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.512 [INFO][5128] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.517 [INFO][5128] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:04.642581 containerd[1896]: 2025-07-15 23:58:04.520 [INFO][5128] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:04.642982 containerd[1896]: 2025-07-15 23:58:04.520 [INFO][5128] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" host="ip-172-31-28-113" Jul 15 23:58:04.642982 containerd[1896]: 2025-07-15 23:58:04.525 [INFO][5128] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e Jul 15 23:58:04.642982 containerd[1896]: 2025-07-15 23:58:04.533 [INFO][5128] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" host="ip-172-31-28-113" Jul 15 23:58:04.642982 containerd[1896]: 2025-07-15 23:58:04.554 [INFO][5128] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.134/26] block=192.168.29.128/26 
handle="k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" host="ip-172-31-28-113" Jul 15 23:58:04.642982 containerd[1896]: 2025-07-15 23:58:04.554 [INFO][5128] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.134/26] handle="k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" host="ip-172-31-28-113" Jul 15 23:58:04.642982 containerd[1896]: 2025-07-15 23:58:04.554 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:04.642982 containerd[1896]: 2025-07-15 23:58:04.554 [INFO][5128] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.134/26] IPv6=[] ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" HandleID="k8s-pod-network.fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Workload="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" Jul 15 23:58:04.644486 containerd[1896]: 2025-07-15 23:58:04.563 [INFO][5114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbczf" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"", Pod:"coredns-674b8bbfcf-sbczf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali048a1675e96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:04.644486 containerd[1896]: 2025-07-15 23:58:04.563 [INFO][5114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.134/32] ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbczf" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" Jul 15 23:58:04.644486 containerd[1896]: 2025-07-15 23:58:04.563 [INFO][5114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali048a1675e96 ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbczf" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" Jul 15 23:58:04.644486 containerd[1896]: 2025-07-15 23:58:04.593 [INFO][5114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-sbczf" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" Jul 15 23:58:04.644486 containerd[1896]: 2025-07-15 23:58:04.599 [INFO][5114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbczf" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e", Pod:"coredns-674b8bbfcf-sbczf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali048a1675e96", MAC:"b2:6a:37:b8:ec:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:04.644486 containerd[1896]: 2025-07-15 23:58:04.627 [INFO][5114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbczf" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--sbczf-eth0" Jul 15 23:58:04.740909 containerd[1896]: time="2025-07-15T23:58:04.740861012Z" level=info msg="connecting to shim fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e" address="unix:///run/containerd/s/1a0ec0ae6a3fcecd6404530c77b219102e0220f860ff3028e3b9fa36c631f6ea" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:04.792149 systemd[1]: Started cri-containerd-fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e.scope - libcontainer container fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e. 
Jul 15 23:58:04.882678 containerd[1896]: time="2025-07-15T23:58:04.882524601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sbczf,Uid:bbdcda6c-439d-4163-bce0-8dfbc9a6f6d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e\"" Jul 15 23:58:04.903914 containerd[1896]: time="2025-07-15T23:58:04.903852398Z" level=info msg="CreateContainer within sandbox \"fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:58:04.933995 containerd[1896]: time="2025-07-15T23:58:04.933870917Z" level=info msg="Container ede66f6afeb1dcaa7147a975dd38cd3a33f455b50056af31a1ccb064c934824a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:04.936488 systemd-networkd[1813]: calic374937ce2e: Gained IPv6LL Jul 15 23:58:04.952449 containerd[1896]: time="2025-07-15T23:58:04.951930213Z" level=info msg="CreateContainer within sandbox \"fa42a6e0a5e94e3219c6cb96911a5e9b7fe0b06551d18f8700ad2634623efa6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ede66f6afeb1dcaa7147a975dd38cd3a33f455b50056af31a1ccb064c934824a\"" Jul 15 23:58:04.953575 containerd[1896]: time="2025-07-15T23:58:04.953548111Z" level=info msg="StartContainer for \"ede66f6afeb1dcaa7147a975dd38cd3a33f455b50056af31a1ccb064c934824a\"" Jul 15 23:58:04.958246 containerd[1896]: time="2025-07-15T23:58:04.957824694Z" level=info msg="connecting to shim ede66f6afeb1dcaa7147a975dd38cd3a33f455b50056af31a1ccb064c934824a" address="unix:///run/containerd/s/1a0ec0ae6a3fcecd6404530c77b219102e0220f860ff3028e3b9fa36c631f6ea" protocol=ttrpc version=3 Jul 15 23:58:04.999875 systemd[1]: Started cri-containerd-ede66f6afeb1dcaa7147a975dd38cd3a33f455b50056af31a1ccb064c934824a.scope - libcontainer container ede66f6afeb1dcaa7147a975dd38cd3a33f455b50056af31a1ccb064c934824a. 
Jul 15 23:58:05.083314 containerd[1896]: time="2025-07-15T23:58:05.083274969Z" level=info msg="StartContainer for \"ede66f6afeb1dcaa7147a975dd38cd3a33f455b50056af31a1ccb064c934824a\" returns successfully" Jul 15 23:58:05.256941 systemd-networkd[1813]: cali1152c7982b3: Gained IPv6LL Jul 15 23:58:05.336599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526737120.mount: Deactivated successfully. Jul 15 23:58:05.346629 containerd[1896]: time="2025-07-15T23:58:05.346312722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7w44v,Uid:78eab6a1-05c2-474a-ada8-8cc9bda3773a,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:05.768401 systemd-networkd[1813]: cali048a1675e96: Gained IPv6LL Jul 15 23:58:05.793911 systemd-networkd[1813]: calibb281e3bf58: Link UP Jul 15 23:58:05.797946 systemd-networkd[1813]: calibb281e3bf58: Gained carrier Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.586 [INFO][5232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0 coredns-674b8bbfcf- kube-system 78eab6a1-05c2-474a-ada8-8cc9bda3773a 837 0 2025-07-15 23:57:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-113 coredns-674b8bbfcf-7w44v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb281e3bf58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w44v" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.589 [INFO][5232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w44v" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.678 [INFO][5241] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" HandleID="k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Workload="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.678 [INFO][5241] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" HandleID="k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Workload="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028d8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-113", "pod":"coredns-674b8bbfcf-7w44v", "timestamp":"2025-07-15 23:58:05.678104065 +0000 UTC"}, Hostname:"ip-172-31-28-113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.678 [INFO][5241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.679 [INFO][5241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.679 [INFO][5241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-113' Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.708 [INFO][5241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.726 [INFO][5241] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.736 [INFO][5241] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.739 [INFO][5241] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.742 [INFO][5241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.742 [INFO][5241] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.746 [INFO][5241] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.754 [INFO][5241] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.773 [INFO][5241] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.135/26] block=192.168.29.128/26 
handle="k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.774 [INFO][5241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.135/26] handle="k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" host="ip-172-31-28-113" Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.775 [INFO][5241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:05.855306 containerd[1896]: 2025-07-15 23:58:05.775 [INFO][5241] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.135/26] IPv6=[] ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" HandleID="k8s-pod-network.69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Workload="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" Jul 15 23:58:05.864080 containerd[1896]: 2025-07-15 23:58:05.786 [INFO][5232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w44v" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78eab6a1-05c2-474a-ada8-8cc9bda3773a", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"", Pod:"coredns-674b8bbfcf-7w44v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb281e3bf58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:05.864080 containerd[1896]: 2025-07-15 23:58:05.786 [INFO][5232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.135/32] ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w44v" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" Jul 15 23:58:05.864080 containerd[1896]: 2025-07-15 23:58:05.786 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb281e3bf58 ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w44v" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" Jul 15 23:58:05.864080 containerd[1896]: 2025-07-15 23:58:05.796 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-7w44v" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" Jul 15 23:58:05.864080 containerd[1896]: 2025-07-15 23:58:05.798 [INFO][5232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w44v" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78eab6a1-05c2-474a-ada8-8cc9bda3773a", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de", Pod:"coredns-674b8bbfcf-7w44v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb281e3bf58", MAC:"e6:3f:50:8d:bb:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:05.864080 containerd[1896]: 2025-07-15 23:58:05.828 [INFO][5232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w44v" WorkloadEndpoint="ip--172--31--28--113-k8s-coredns--674b8bbfcf--7w44v-eth0" Jul 15 23:58:05.993073 kubelet[3190]: I0715 23:58:05.978051 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sbczf" podStartSLOduration=39.936767472 podStartE2EDuration="39.936767472s" podCreationTimestamp="2025-07-15 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:58:05.896479912 +0000 UTC m=+45.745755399" watchObservedRunningTime="2025-07-15 23:58:05.936767472 +0000 UTC m=+45.786042949" Jul 15 23:58:06.007061 containerd[1896]: time="2025-07-15T23:58:06.007012991Z" level=info msg="connecting to shim 69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de" address="unix:///run/containerd/s/19267004bdc2d1131dc4fd5f94a1345d513858c83fd5fe98590b23520fbb7387" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:06.086602 systemd[1]: Started cri-containerd-69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de.scope - libcontainer container 69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de. 
Jul 15 23:58:06.190121 containerd[1896]: time="2025-07-15T23:58:06.190082934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7w44v,Uid:78eab6a1-05c2-474a-ada8-8cc9bda3773a,Namespace:kube-system,Attempt:0,} returns sandbox id \"69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de\"" Jul 15 23:58:06.207023 containerd[1896]: time="2025-07-15T23:58:06.206984076Z" level=info msg="CreateContainer within sandbox \"69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:58:06.228108 containerd[1896]: time="2025-07-15T23:58:06.228053453Z" level=info msg="Container 04948f6f4f6ffe376b45214255e4e4352da8541dea70941e7e49fb6709eaf228: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:06.244944 containerd[1896]: time="2025-07-15T23:58:06.244903257Z" level=info msg="CreateContainer within sandbox \"69c72c6235d1c9302327ab0d626df9df762285b2d78e82061c68fafbd526e6de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04948f6f4f6ffe376b45214255e4e4352da8541dea70941e7e49fb6709eaf228\"" Jul 15 23:58:06.247167 containerd[1896]: time="2025-07-15T23:58:06.247124972Z" level=info msg="StartContainer for \"04948f6f4f6ffe376b45214255e4e4352da8541dea70941e7e49fb6709eaf228\"" Jul 15 23:58:06.251602 containerd[1896]: time="2025-07-15T23:58:06.251524966Z" level=info msg="connecting to shim 04948f6f4f6ffe376b45214255e4e4352da8541dea70941e7e49fb6709eaf228" address="unix:///run/containerd/s/19267004bdc2d1131dc4fd5f94a1345d513858c83fd5fe98590b23520fbb7387" protocol=ttrpc version=3 Jul 15 23:58:06.279608 systemd[1]: Started cri-containerd-04948f6f4f6ffe376b45214255e4e4352da8541dea70941e7e49fb6709eaf228.scope - libcontainer container 04948f6f4f6ffe376b45214255e4e4352da8541dea70941e7e49fb6709eaf228. Jul 15 23:58:06.332875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706251625.mount: Deactivated successfully. 
Jul 15 23:58:06.334410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601726036.mount: Deactivated successfully. Jul 15 23:58:06.357968 containerd[1896]: time="2025-07-15T23:58:06.357628659Z" level=info msg="StartContainer for \"04948f6f4f6ffe376b45214255e4e4352da8541dea70941e7e49fb6709eaf228\" returns successfully" Jul 15 23:58:06.779557 kubelet[3190]: I0715 23:58:06.779300 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7w44v" podStartSLOduration=40.779279662 podStartE2EDuration="40.779279662s" podCreationTimestamp="2025-07-15 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:58:06.778290308 +0000 UTC m=+46.627565787" watchObservedRunningTime="2025-07-15 23:58:06.779279662 +0000 UTC m=+46.628555141" Jul 15 23:58:07.213151 containerd[1896]: time="2025-07-15T23:58:07.213101001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:07.214404 containerd[1896]: time="2025-07-15T23:58:07.214016914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 23:58:07.221518 containerd[1896]: time="2025-07-15T23:58:07.221441755Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:07.224801 containerd[1896]: time="2025-07-15T23:58:07.224706031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:07.225566 containerd[1896]: time="2025-07-15T23:58:07.225386460Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.179880689s" Jul 15 23:58:07.225566 containerd[1896]: time="2025-07-15T23:58:07.225420363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 23:58:07.226730 containerd[1896]: time="2025-07-15T23:58:07.226682742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 23:58:07.260779 containerd[1896]: time="2025-07-15T23:58:07.260735589Z" level=info msg="CreateContainer within sandbox \"1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 23:58:07.277301 containerd[1896]: time="2025-07-15T23:58:07.276815601Z" level=info msg="Container 9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:07.304018 containerd[1896]: time="2025-07-15T23:58:07.303901379Z" level=info msg="CreateContainer within sandbox \"1e844208123baa5ddfed1cf713a6942ed810f0626fa7badb621d0bf9189ffd2d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\"" Jul 15 23:58:07.305033 containerd[1896]: time="2025-07-15T23:58:07.304985784Z" level=info msg="StartContainer for \"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\"" Jul 15 23:58:07.306807 containerd[1896]: time="2025-07-15T23:58:07.306767426Z" level=info msg="connecting to shim 9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4" 
address="unix:///run/containerd/s/d808b61ef0ea7d6447376196ce92ca0cf479d7e169457fc09c67cc4565a2dafc" protocol=ttrpc version=3 Jul 15 23:58:07.314293 containerd[1896]: time="2025-07-15T23:58:07.313506131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94bf58f7b-mpkpj,Uid:642af4f8-45e2-4a1f-a15e-a31e1bedc462,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:58:07.349028 systemd[1]: Started cri-containerd-9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4.scope - libcontainer container 9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4. Jul 15 23:58:07.435370 containerd[1896]: time="2025-07-15T23:58:07.435290252Z" level=info msg="StartContainer for \"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\" returns successfully" Jul 15 23:58:07.582954 systemd-networkd[1813]: cali828a1750e1a: Link UP Jul 15 23:58:07.583737 systemd-networkd[1813]: cali828a1750e1a: Gained carrier Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.460 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0 calico-apiserver-94bf58f7b- calico-apiserver 642af4f8-45e2-4a1f-a15e-a31e1bedc462 848 0 2025-07-15 23:57:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94bf58f7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-113 calico-apiserver-94bf58f7b-mpkpj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali828a1750e1a [] [] }} ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-mpkpj" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-" Jul 15 23:58:07.609645 
containerd[1896]: 2025-07-15 23:58:07.460 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-mpkpj" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.508 [INFO][5403] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" HandleID="k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Workload="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.508 [INFO][5403] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" HandleID="k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Workload="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf0a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-113", "pod":"calico-apiserver-94bf58f7b-mpkpj", "timestamp":"2025-07-15 23:58:07.508453691 +0000 UTC"}, Hostname:"ip-172-31-28-113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.508 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.509 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.509 [INFO][5403] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-113' Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.518 [INFO][5403] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.528 [INFO][5403] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.535 [INFO][5403] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.538 [INFO][5403] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.543 [INFO][5403] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.544 [INFO][5403] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.545 [INFO][5403] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966 Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.554 [INFO][5403] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.568 [INFO][5403] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.136/26] block=192.168.29.128/26 
handle="k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.569 [INFO][5403] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.136/26] handle="k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" host="ip-172-31-28-113" Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.569 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:07.609645 containerd[1896]: 2025-07-15 23:58:07.569 [INFO][5403] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.136/26] IPv6=[] ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" HandleID="k8s-pod-network.71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Workload="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" Jul 15 23:58:07.612617 containerd[1896]: 2025-07-15 23:58:07.576 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-mpkpj" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0", GenerateName:"calico-apiserver-94bf58f7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"642af4f8-45e2-4a1f-a15e-a31e1bedc462", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94bf58f7b", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"", Pod:"calico-apiserver-94bf58f7b-mpkpj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali828a1750e1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:07.612617 containerd[1896]: 2025-07-15 23:58:07.577 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.136/32] ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-mpkpj" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" Jul 15 23:58:07.612617 containerd[1896]: 2025-07-15 23:58:07.577 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali828a1750e1a ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-mpkpj" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" Jul 15 23:58:07.612617 containerd[1896]: 2025-07-15 23:58:07.580 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-mpkpj" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" Jul 15 23:58:07.612617 
containerd[1896]: 2025-07-15 23:58:07.581 [INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-mpkpj" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0", GenerateName:"calico-apiserver-94bf58f7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"642af4f8-45e2-4a1f-a15e-a31e1bedc462", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94bf58f7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-113", ContainerID:"71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966", Pod:"calico-apiserver-94bf58f7b-mpkpj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali828a1750e1a", MAC:"fe:e8:71:d9:ad:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:07.612617 containerd[1896]: 
2025-07-15 23:58:07.599 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" Namespace="calico-apiserver" Pod="calico-apiserver-94bf58f7b-mpkpj" WorkloadEndpoint="ip--172--31--28--113-k8s-calico--apiserver--94bf58f7b--mpkpj-eth0" Jul 15 23:58:07.624405 systemd-networkd[1813]: calibb281e3bf58: Gained IPv6LL Jul 15 23:58:07.684209 containerd[1896]: time="2025-07-15T23:58:07.682913774Z" level=info msg="connecting to shim 71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966" address="unix:///run/containerd/s/12da6654461edc75b46931ab6cab0012fd3082ba4696365ff9eed58ec546bcd5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:07.738608 systemd[1]: Started cri-containerd-71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966.scope - libcontainer container 71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966. Jul 15 23:58:07.797259 kubelet[3190]: I0715 23:58:07.796639 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-9bgld" podStartSLOduration=23.453882081 podStartE2EDuration="27.796601304s" podCreationTimestamp="2025-07-15 23:57:40 +0000 UTC" firstStartedPulling="2025-07-15 23:58:02.883823609 +0000 UTC m=+42.733099064" lastFinishedPulling="2025-07-15 23:58:07.22654283 +0000 UTC m=+47.075818287" observedRunningTime="2025-07-15 23:58:07.794592639 +0000 UTC m=+47.643868116" watchObservedRunningTime="2025-07-15 23:58:07.796601304 +0000 UTC m=+47.645876781" Jul 15 23:58:07.840027 containerd[1896]: time="2025-07-15T23:58:07.839892345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94bf58f7b-mpkpj,Uid:642af4f8-45e2-4a1f-a15e-a31e1bedc462,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966\"" Jul 15 23:58:08.694808 systemd[1]: Started sshd@7-172.31.28.113:22-139.178.89.65:60768.service 
- OpenSSH per-connection server daemon (139.178.89.65:60768). Jul 15 23:58:08.916758 sshd[5472]: Accepted publickey for core from 139.178.89.65 port 60768 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:08.921305 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:08.933550 systemd-logind[1854]: New session 8 of user core. Jul 15 23:58:08.942437 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 23:58:09.075702 containerd[1896]: time="2025-07-15T23:58:09.075189596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\" id:\"259f50bb251e5f3964b8f43ac9fc8a3e89fe5a5dab0e0e6df1dd2ac5c7fc58ea\" pid:5486 exit_status:1 exited_at:{seconds:1752623888 nanos:965861021}" Jul 15 23:58:09.417462 systemd-networkd[1813]: cali828a1750e1a: Gained IPv6LL Jul 15 23:58:10.039415 containerd[1896]: time="2025-07-15T23:58:10.039366719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\" id:\"ad35b92c1545fd4b66c40518c354e57a2bb532670e3f299303bf259fd6c37484\" pid:5519 exit_status:1 exited_at:{seconds:1752623890 nanos:38714449}" Jul 15 23:58:10.041255 sshd[5497]: Connection closed by 139.178.89.65 port 60768 Jul 15 23:58:10.042401 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:10.050396 systemd[1]: sshd@7-172.31.28.113:22-139.178.89.65:60768.service: Deactivated successfully. Jul 15 23:58:10.053628 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:58:10.055376 systemd[1]: session-8.scope: Consumed 459ms CPU time, 64.3M memory peak. Jul 15 23:58:10.058407 systemd-logind[1854]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:58:10.062096 systemd-logind[1854]: Removed session 8. 
Jul 15 23:58:11.852075 containerd[1896]: time="2025-07-15T23:58:11.852008136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:11.853072 containerd[1896]: time="2025-07-15T23:58:11.852979171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 23:58:11.854433 containerd[1896]: time="2025-07-15T23:58:11.854397602Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:11.856711 containerd[1896]: time="2025-07-15T23:58:11.856540937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:11.857400 containerd[1896]: time="2025-07-15T23:58:11.857368375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.630531608s" Jul 15 23:58:11.857479 containerd[1896]: time="2025-07-15T23:58:11.857401556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 23:58:11.858743 containerd[1896]: time="2025-07-15T23:58:11.858718900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 23:58:11.893682 containerd[1896]: time="2025-07-15T23:58:11.893340749Z" level=info msg="CreateContainer within sandbox 
\"4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 23:58:11.903190 containerd[1896]: time="2025-07-15T23:58:11.903145735Z" level=info msg="Container 68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:11.910903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247117295.mount: Deactivated successfully. Jul 15 23:58:11.915897 containerd[1896]: time="2025-07-15T23:58:11.915802482Z" level=info msg="CreateContainer within sandbox \"4e34c9abe5d139f0e674f6d2074c6f21c0114ae587d9e2bc13af46018a2b2964\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\"" Jul 15 23:58:11.917465 containerd[1896]: time="2025-07-15T23:58:11.916571868Z" level=info msg="StartContainer for \"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\"" Jul 15 23:58:11.919773 containerd[1896]: time="2025-07-15T23:58:11.919734461Z" level=info msg="connecting to shim 68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2" address="unix:///run/containerd/s/97a7a53e26b473c15b8c39bd67d06afc3e90ead10764fffc4358403912b58d56" protocol=ttrpc version=3 Jul 15 23:58:11.999518 systemd[1]: Started cri-containerd-68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2.scope - libcontainer container 68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2. 
Jul 15 23:58:12.065565 ntpd[1848]: Listen normally on 7 vxlan.calico 192.168.29.128:123 Jul 15 23:58:12.065661 ntpd[1848]: Listen normally on 8 calib0376717131 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 23:58:12.065715 ntpd[1848]: Listen normally on 9 vxlan.calico [fe80::64ed:22ff:feaa:b045%5]:123 Jul 15 23:58:12.065753 ntpd[1848]: Listen normally on 10 calif66ba99dcd4 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 23:58:12.065806 ntpd[1848]: Listen normally on 11 calid7c4f9d2223 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 23:58:12.065848 ntpd[1848]: Listen normally on 12 calic374937ce2e [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 23:58:12.065889 ntpd[1848]: Listen normally on 13 cali1152c7982b3 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 23:58:12.065927 ntpd[1848]: Listen normally on 14 cali048a1675e96 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 23:58:12.065975 ntpd[1848]: Listen normally on 15 calibb281e3bf58 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 23:58:12.066011 ntpd[1848]: Listen normally on 16 cali828a1750e1a [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 23:58:12.102197 containerd[1896]: time="2025-07-15T23:58:12.093077168Z" level=info msg="StartContainer for \"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\" returns successfully" Jul 15 23:58:12.815140 kubelet[3190]: I0715 23:58:12.815054 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d5f69495f-dmf7c" podStartSLOduration=23.990687383 podStartE2EDuration="32.815036828s" podCreationTimestamp="2025-07-15 23:57:40 +0000 UTC" firstStartedPulling="2025-07-15 23:58:03.034172618 +0000 UTC m=+42.883448072" lastFinishedPulling="2025-07-15 23:58:11.858522053 +0000 UTC m=+51.707797517" observedRunningTime="2025-07-15 23:58:12.814548043 +0000 UTC m=+52.663823520" watchObservedRunningTime="2025-07-15 23:58:12.815036828 +0000 UTC m=+52.664312333" Jul 15 23:58:12.887019 containerd[1896]: time="2025-07-15T23:58:12.886960820Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\" id:\"5569848b7f0aa160bb722c75ae7bc0e95e38998b3f940980127328495207f3f1\" pid:5604 exited_at:{seconds:1752623892 nanos:878361758}" Jul 15 23:58:13.890181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365115635.mount: Deactivated successfully.
Jul 15 23:58:13.944099 containerd[1896]: time="2025-07-15T23:58:13.943981607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:13.947935 containerd[1896]: time="2025-07-15T23:58:13.947896554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 23:58:13.950707 containerd[1896]: time="2025-07-15T23:58:13.950632144Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:13.955805 containerd[1896]: time="2025-07-15T23:58:13.954977361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:13.955805 containerd[1896]: time="2025-07-15T23:58:13.955656428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.096901569s" Jul 15 23:58:13.955805 containerd[1896]: time="2025-07-15T23:58:13.955693890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 23:58:13.957018 containerd[1896]: time="2025-07-15T23:58:13.956980767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 23:58:13.963394 containerd[1896]: time="2025-07-15T23:58:13.963357262Z" level=info msg="CreateContainer within sandbox 
\"d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 23:58:14.014800 containerd[1896]: time="2025-07-15T23:58:14.012929416Z" level=info msg="Container 3659bfff3ed05436685a7950ba74834200fd90bf464675e842b1915d154c79f8: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:14.112918 containerd[1896]: time="2025-07-15T23:58:14.112869136Z" level=info msg="CreateContainer within sandbox \"d877903f15bd54b550545db1d988fba4212b2ce076db301776ed2086b09784c3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3659bfff3ed05436685a7950ba74834200fd90bf464675e842b1915d154c79f8\"" Jul 15 23:58:14.114883 containerd[1896]: time="2025-07-15T23:58:14.114576320Z" level=info msg="StartContainer for \"3659bfff3ed05436685a7950ba74834200fd90bf464675e842b1915d154c79f8\"" Jul 15 23:58:14.121594 containerd[1896]: time="2025-07-15T23:58:14.121542892Z" level=info msg="connecting to shim 3659bfff3ed05436685a7950ba74834200fd90bf464675e842b1915d154c79f8" address="unix:///run/containerd/s/2a4d3bdb09568ec781619f7494a32b3664e4122e37b9136f6f4275e5458ffea6" protocol=ttrpc version=3 Jul 15 23:58:14.177559 systemd[1]: Started cri-containerd-3659bfff3ed05436685a7950ba74834200fd90bf464675e842b1915d154c79f8.scope - libcontainer container 3659bfff3ed05436685a7950ba74834200fd90bf464675e842b1915d154c79f8. 
Jul 15 23:58:14.360495 containerd[1896]: time="2025-07-15T23:58:14.360438708Z" level=info msg="StartContainer for \"3659bfff3ed05436685a7950ba74834200fd90bf464675e842b1915d154c79f8\" returns successfully" Jul 15 23:58:14.819366 kubelet[3190]: I0715 23:58:14.819312 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6b7c7ccbcd-vh2c7" podStartSLOduration=3.299784465 podStartE2EDuration="15.819297536s" podCreationTimestamp="2025-07-15 23:57:59 +0000 UTC" firstStartedPulling="2025-07-15 23:58:01.437258064 +0000 UTC m=+41.286533521" lastFinishedPulling="2025-07-15 23:58:13.956771118 +0000 UTC m=+53.806046592" observedRunningTime="2025-07-15 23:58:14.8191573 +0000 UTC m=+54.668432777" watchObservedRunningTime="2025-07-15 23:58:14.819297536 +0000 UTC m=+54.668573003" Jul 15 23:58:15.074571 systemd[1]: Started sshd@8-172.31.28.113:22-139.178.89.65:36770.service - OpenSSH per-connection server daemon (139.178.89.65:36770). Jul 15 23:58:15.320346 sshd[5657]: Accepted publickey for core from 139.178.89.65 port 36770 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:15.324350 sshd-session[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:15.330880 systemd-logind[1854]: New session 9 of user core. Jul 15 23:58:15.335523 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 23:58:15.982507 sshd[5659]: Connection closed by 139.178.89.65 port 36770 Jul 15 23:58:15.983580 sshd-session[5657]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:15.989080 systemd-logind[1854]: Session 9 logged out. Waiting for processes to exit. Jul 15 23:58:15.990332 systemd[1]: sshd@8-172.31.28.113:22-139.178.89.65:36770.service: Deactivated successfully. Jul 15 23:58:15.992995 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 23:58:15.996037 systemd-logind[1854]: Removed session 9. 
Jul 15 23:58:16.261721 containerd[1896]: time="2025-07-15T23:58:16.261586142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:16.263360 containerd[1896]: time="2025-07-15T23:58:16.263213966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 23:58:16.264882 containerd[1896]: time="2025-07-15T23:58:16.264836681Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:16.268379 containerd[1896]: time="2025-07-15T23:58:16.267818752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:16.268379 containerd[1896]: time="2025-07-15T23:58:16.268253035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.311239208s" Jul 15 23:58:16.268379 containerd[1896]: time="2025-07-15T23:58:16.268281728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 23:58:16.269834 containerd[1896]: time="2025-07-15T23:58:16.269799953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:58:16.275710 containerd[1896]: time="2025-07-15T23:58:16.275664049Z" level=info msg="CreateContainer within sandbox \"d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 23:58:16.326788 containerd[1896]: time="2025-07-15T23:58:16.326738505Z" level=info msg="Container 63cf5ccb54825a4020f37ed1431cc64c7440f1ae2365569d58252576b558f395: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:16.365958 containerd[1896]: time="2025-07-15T23:58:16.365892185Z" level=info msg="CreateContainer within sandbox \"d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"63cf5ccb54825a4020f37ed1431cc64c7440f1ae2365569d58252576b558f395\"" Jul 15 23:58:16.366640 containerd[1896]: time="2025-07-15T23:58:16.366615817Z" level=info msg="StartContainer for \"63cf5ccb54825a4020f37ed1431cc64c7440f1ae2365569d58252576b558f395\"" Jul 15 23:58:16.368412 containerd[1896]: time="2025-07-15T23:58:16.368331141Z" level=info msg="connecting to shim 63cf5ccb54825a4020f37ed1431cc64c7440f1ae2365569d58252576b558f395" address="unix:///run/containerd/s/2e7dd289ed55024b963a649d8cc02e1d0e3ee7d4e683ce3c382c9846ccf01211" protocol=ttrpc version=3 Jul 15 23:58:16.400878 systemd[1]: Started cri-containerd-63cf5ccb54825a4020f37ed1431cc64c7440f1ae2365569d58252576b558f395.scope - libcontainer container 63cf5ccb54825a4020f37ed1431cc64c7440f1ae2365569d58252576b558f395. 
Jul 15 23:58:16.455494 containerd[1896]: time="2025-07-15T23:58:16.455451462Z" level=info msg="StartContainer for \"63cf5ccb54825a4020f37ed1431cc64c7440f1ae2365569d58252576b558f395\" returns successfully" Jul 15 23:58:18.945735 containerd[1896]: time="2025-07-15T23:58:18.945688852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:18.947920 containerd[1896]: time="2025-07-15T23:58:18.947791907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 23:58:18.950740 containerd[1896]: time="2025-07-15T23:58:18.950700458Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:18.957250 containerd[1896]: time="2025-07-15T23:58:18.956572289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:18.958793 containerd[1896]: time="2025-07-15T23:58:18.958736161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.688758545s" Jul 15 23:58:18.960971 containerd[1896]: time="2025-07-15T23:58:18.958941704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:58:18.990846 containerd[1896]: time="2025-07-15T23:58:18.990804165Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:58:19.182925 containerd[1896]: time="2025-07-15T23:58:19.182772816Z" level=info msg="CreateContainer within sandbox \"060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:58:19.222120 containerd[1896]: time="2025-07-15T23:58:19.219796446Z" level=info msg="Container 4a6c3c60525a169ad097cffb4d4dca56c5de0daddf28ebcc6c13c1e176e9fac5: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:19.242244 containerd[1896]: time="2025-07-15T23:58:19.242099547Z" level=info msg="CreateContainer within sandbox \"060ce9129f3071cfc4fe32de224b590919a049d9db1bdf765df131597bdd432e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a6c3c60525a169ad097cffb4d4dca56c5de0daddf28ebcc6c13c1e176e9fac5\"" Jul 15 23:58:19.244005 containerd[1896]: time="2025-07-15T23:58:19.243862943Z" level=info msg="StartContainer for \"4a6c3c60525a169ad097cffb4d4dca56c5de0daddf28ebcc6c13c1e176e9fac5\"" Jul 15 23:58:19.246746 containerd[1896]: time="2025-07-15T23:58:19.246646374Z" level=info msg="connecting to shim 4a6c3c60525a169ad097cffb4d4dca56c5de0daddf28ebcc6c13c1e176e9fac5" address="unix:///run/containerd/s/e13d07f48f22eff53092a4a3eb9daecdb0a6edde83b2d08ba5264aadfa952795" protocol=ttrpc version=3 Jul 15 23:58:19.295810 systemd[1]: Started cri-containerd-4a6c3c60525a169ad097cffb4d4dca56c5de0daddf28ebcc6c13c1e176e9fac5.scope - libcontainer container 4a6c3c60525a169ad097cffb4d4dca56c5de0daddf28ebcc6c13c1e176e9fac5. 
Jul 15 23:58:19.471804 containerd[1896]: time="2025-07-15T23:58:19.471283907Z" level=info msg="StartContainer for \"4a6c3c60525a169ad097cffb4d4dca56c5de0daddf28ebcc6c13c1e176e9fac5\" returns successfully" Jul 15 23:58:19.503675 containerd[1896]: time="2025-07-15T23:58:19.503546315Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:19.506527 containerd[1896]: time="2025-07-15T23:58:19.506476363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:58:19.508482 containerd[1896]: time="2025-07-15T23:58:19.508423881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 517.577959ms" Jul 15 23:58:19.508649 containerd[1896]: time="2025-07-15T23:58:19.508485307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:58:19.510131 containerd[1896]: time="2025-07-15T23:58:19.509920621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 23:58:19.516980 containerd[1896]: time="2025-07-15T23:58:19.516939849Z" level=info msg="CreateContainer within sandbox \"71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:58:19.532587 containerd[1896]: time="2025-07-15T23:58:19.532516010Z" level=info msg="Container 6ca149f22be5096dd06cae49809d4bf57ee115b43446efd220e7bf71ae9e8340: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:19.582684 
containerd[1896]: time="2025-07-15T23:58:19.582608788Z" level=info msg="CreateContainer within sandbox \"71565146ac844f4314a6d404497a898f7c8aea078c617a122d48ecdd42884966\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6ca149f22be5096dd06cae49809d4bf57ee115b43446efd220e7bf71ae9e8340\"" Jul 15 23:58:19.584286 containerd[1896]: time="2025-07-15T23:58:19.583652800Z" level=info msg="StartContainer for \"6ca149f22be5096dd06cae49809d4bf57ee115b43446efd220e7bf71ae9e8340\"" Jul 15 23:58:19.587935 containerd[1896]: time="2025-07-15T23:58:19.587885315Z" level=info msg="connecting to shim 6ca149f22be5096dd06cae49809d4bf57ee115b43446efd220e7bf71ae9e8340" address="unix:///run/containerd/s/12da6654461edc75b46931ab6cab0012fd3082ba4696365ff9eed58ec546bcd5" protocol=ttrpc version=3 Jul 15 23:58:19.619471 systemd[1]: Started cri-containerd-6ca149f22be5096dd06cae49809d4bf57ee115b43446efd220e7bf71ae9e8340.scope - libcontainer container 6ca149f22be5096dd06cae49809d4bf57ee115b43446efd220e7bf71ae9e8340. 
Jul 15 23:58:19.728405 containerd[1896]: time="2025-07-15T23:58:19.728365960Z" level=info msg="StartContainer for \"6ca149f22be5096dd06cae49809d4bf57ee115b43446efd220e7bf71ae9e8340\" returns successfully" Jul 15 23:58:20.096498 kubelet[3190]: I0715 23:58:20.087777 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94bf58f7b-9lt55" podStartSLOduration=28.92228377 podStartE2EDuration="44.078188554s" podCreationTimestamp="2025-07-15 23:57:36 +0000 UTC" firstStartedPulling="2025-07-15 23:58:03.834533438 +0000 UTC m=+43.683808897" lastFinishedPulling="2025-07-15 23:58:18.990438206 +0000 UTC m=+58.839713681" observedRunningTime="2025-07-15 23:58:20.061896683 +0000 UTC m=+59.911172165" watchObservedRunningTime="2025-07-15 23:58:20.078188554 +0000 UTC m=+59.927464035" Jul 15 23:58:20.107003 kubelet[3190]: I0715 23:58:20.106790 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94bf58f7b-mpkpj" podStartSLOduration=32.439415682 podStartE2EDuration="44.106771743s" podCreationTimestamp="2025-07-15 23:57:36 +0000 UTC" firstStartedPulling="2025-07-15 23:58:07.842343514 +0000 UTC m=+47.691618969" lastFinishedPulling="2025-07-15 23:58:19.509699555 +0000 UTC m=+59.358975030" observedRunningTime="2025-07-15 23:58:20.106106135 +0000 UTC m=+59.955381614" watchObservedRunningTime="2025-07-15 23:58:20.106771743 +0000 UTC m=+59.956047223" Jul 15 23:58:21.016561 kubelet[3190]: I0715 23:58:21.015594 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:21.026571 systemd[1]: Started sshd@9-172.31.28.113:22-139.178.89.65:59862.service - OpenSSH per-connection server daemon (139.178.89.65:59862). 
Jul 15 23:58:21.429826 sshd[5791]: Accepted publickey for core from 139.178.89.65 port 59862 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:21.441720 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:21.452101 systemd-logind[1854]: New session 10 of user core. Jul 15 23:58:21.457410 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 23:58:22.043038 kubelet[3190]: I0715 23:58:22.042428 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:22.297184 containerd[1896]: time="2025-07-15T23:58:22.296518965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:22.300333 containerd[1896]: time="2025-07-15T23:58:22.300272777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 23:58:22.303483 containerd[1896]: time="2025-07-15T23:58:22.303424153Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:22.307181 containerd[1896]: time="2025-07-15T23:58:22.307127108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:22.310408 containerd[1896]: time="2025-07-15T23:58:22.310158342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.800200887s" Jul 15 23:58:22.310408 containerd[1896]: time="2025-07-15T23:58:22.310365569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 23:58:22.320258 containerd[1896]: time="2025-07-15T23:58:22.319317743Z" level=info msg="CreateContainer within sandbox \"d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 23:58:22.371878 containerd[1896]: time="2025-07-15T23:58:22.370392001Z" level=info msg="Container 93e5a18a8cbda21e2057cf3714f68948a0533c5fe6920a4460d63bad9c767b63: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:22.379144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294495347.mount: Deactivated successfully. 
Jul 15 23:58:22.390451 containerd[1896]: time="2025-07-15T23:58:22.390375424Z" level=info msg="CreateContainer within sandbox \"d6db75d8446729d2a2e6e1f53cf6e28b88ee41bf5ea91e7820954be6fc542470\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"93e5a18a8cbda21e2057cf3714f68948a0533c5fe6920a4460d63bad9c767b63\"" Jul 15 23:58:22.396101 containerd[1896]: time="2025-07-15T23:58:22.395550741Z" level=info msg="StartContainer for \"93e5a18a8cbda21e2057cf3714f68948a0533c5fe6920a4460d63bad9c767b63\"" Jul 15 23:58:22.404071 containerd[1896]: time="2025-07-15T23:58:22.404018744Z" level=info msg="connecting to shim 93e5a18a8cbda21e2057cf3714f68948a0533c5fe6920a4460d63bad9c767b63" address="unix:///run/containerd/s/2e7dd289ed55024b963a649d8cc02e1d0e3ee7d4e683ce3c382c9846ccf01211" protocol=ttrpc version=3 Jul 15 23:58:22.473758 systemd[1]: Started cri-containerd-93e5a18a8cbda21e2057cf3714f68948a0533c5fe6920a4460d63bad9c767b63.scope - libcontainer container 93e5a18a8cbda21e2057cf3714f68948a0533c5fe6920a4460d63bad9c767b63. Jul 15 23:58:22.761018 containerd[1896]: time="2025-07-15T23:58:22.760680127Z" level=info msg="StartContainer for \"93e5a18a8cbda21e2057cf3714f68948a0533c5fe6920a4460d63bad9c767b63\" returns successfully" Jul 15 23:58:22.970471 sshd[5797]: Connection closed by 139.178.89.65 port 59862 Jul 15 23:58:22.971746 sshd-session[5791]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:22.979372 systemd[1]: sshd@9-172.31.28.113:22-139.178.89.65:59862.service: Deactivated successfully. Jul 15 23:58:22.985508 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 23:58:22.989345 systemd-logind[1854]: Session 10 logged out. Waiting for processes to exit. Jul 15 23:58:23.019552 systemd[1]: Started sshd@10-172.31.28.113:22-139.178.89.65:59868.service - OpenSSH per-connection server daemon (139.178.89.65:59868). Jul 15 23:58:23.022192 systemd-logind[1854]: Removed session 10. 
Jul 15 23:58:23.094975 kubelet[3190]: I0715 23:58:23.094918 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5b8vh" podStartSLOduration=24.475839717 podStartE2EDuration="43.094900749s" podCreationTimestamp="2025-07-15 23:57:40 +0000 UTC" firstStartedPulling="2025-07-15 23:58:03.693368247 +0000 UTC m=+43.542643711" lastFinishedPulling="2025-07-15 23:58:22.312429285 +0000 UTC m=+62.161704743" observedRunningTime="2025-07-15 23:58:23.092637871 +0000 UTC m=+62.941913329" watchObservedRunningTime="2025-07-15 23:58:23.094900749 +0000 UTC m=+62.944176226" Jul 15 23:58:23.255879 sshd[5852]: Accepted publickey for core from 139.178.89.65 port 59868 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:23.257470 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:23.263147 systemd-logind[1854]: New session 11 of user core. Jul 15 23:58:23.268456 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 23:58:23.706026 sshd[5856]: Connection closed by 139.178.89.65 port 59868 Jul 15 23:58:23.711722 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:23.723343 systemd[1]: sshd@10-172.31.28.113:22-139.178.89.65:59868.service: Deactivated successfully. Jul 15 23:58:23.724043 kubelet[3190]: I0715 23:58:23.719775 3190 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 23:58:23.726370 kubelet[3190]: I0715 23:58:23.725786 3190 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 23:58:23.730361 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 23:58:23.734483 systemd-logind[1854]: Session 11 logged out. Waiting for processes to exit. 
Jul 15 23:58:23.752622 systemd[1]: Started sshd@11-172.31.28.113:22-139.178.89.65:59878.service - OpenSSH per-connection server daemon (139.178.89.65:59878). Jul 15 23:58:23.755287 systemd-logind[1854]: Removed session 11. Jul 15 23:58:23.974655 sshd[5866]: Accepted publickey for core from 139.178.89.65 port 59878 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:23.975662 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:23.983460 systemd-logind[1854]: New session 12 of user core. Jul 15 23:58:23.986469 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 23:58:24.365646 sshd[5868]: Connection closed by 139.178.89.65 port 59878 Jul 15 23:58:24.367198 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:24.374589 systemd[1]: sshd@11-172.31.28.113:22-139.178.89.65:59878.service: Deactivated successfully. Jul 15 23:58:24.377315 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 23:58:24.380990 systemd-logind[1854]: Session 12 logged out. Waiting for processes to exit. Jul 15 23:58:24.385099 systemd-logind[1854]: Removed session 12. Jul 15 23:58:29.404542 systemd[1]: Started sshd@12-172.31.28.113:22-139.178.89.65:50342.service - OpenSSH per-connection server daemon (139.178.89.65:50342). Jul 15 23:58:29.707101 sshd[5888]: Accepted publickey for core from 139.178.89.65 port 50342 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:29.711950 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:29.720260 systemd-logind[1854]: New session 13 of user core. Jul 15 23:58:29.725451 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 15 23:58:30.379456 sshd[5891]: Connection closed by 139.178.89.65 port 50342 Jul 15 23:58:30.380470 sshd-session[5888]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:30.386038 systemd[1]: sshd@12-172.31.28.113:22-139.178.89.65:50342.service: Deactivated successfully. Jul 15 23:58:30.389235 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 23:58:30.390454 systemd-logind[1854]: Session 13 logged out. Waiting for processes to exit. Jul 15 23:58:30.392470 systemd-logind[1854]: Removed session 13. Jul 15 23:58:31.302768 containerd[1896]: time="2025-07-15T23:58:31.302713949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\" id:\"a8122dfc2a692f0f35054a590ec3e347d8c19692c284b989763ef43b3b2924a3\" pid:5914 exited_at:{seconds:1752623911 nanos:284094851}" Jul 15 23:58:35.419368 systemd[1]: Started sshd@13-172.31.28.113:22-139.178.89.65:50348.service - OpenSSH per-connection server daemon (139.178.89.65:50348). Jul 15 23:58:35.665351 sshd[5930]: Accepted publickey for core from 139.178.89.65 port 50348 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:35.667116 sshd-session[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:35.673058 systemd-logind[1854]: New session 14 of user core. Jul 15 23:58:35.679443 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 23:58:36.352428 sshd[5932]: Connection closed by 139.178.89.65 port 50348 Jul 15 23:58:36.354257 sshd-session[5930]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:36.358702 systemd-logind[1854]: Session 14 logged out. Waiting for processes to exit. Jul 15 23:58:36.359769 systemd[1]: sshd@13-172.31.28.113:22-139.178.89.65:50348.service: Deactivated successfully. Jul 15 23:58:36.363075 systemd[1]: session-14.scope: Deactivated successfully. 
Jul 15 23:58:36.364983 systemd-logind[1854]: Removed session 14. Jul 15 23:58:38.931117 containerd[1896]: time="2025-07-15T23:58:38.931076893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\" id:\"904ab9e14322bdd9cb26f17488d13c24520026eaccf027602f8a399f055746f2\" pid:5955 exited_at:{seconds:1752623918 nanos:930657084}" Jul 15 23:58:39.629834 kubelet[3190]: I0715 23:58:39.629586 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:40.340124 containerd[1896]: time="2025-07-15T23:58:40.340052810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\" id:\"22876d4dbfa2b806f218656dd339faf16a26e1a89a3b6985542ddc3cd5e3eee9\" pid:5979 exited_at:{seconds:1752623920 nanos:338612778}" Jul 15 23:58:41.389590 systemd[1]: Started sshd@14-172.31.28.113:22-139.178.89.65:41784.service - OpenSSH per-connection server daemon (139.178.89.65:41784). Jul 15 23:58:41.664944 sshd[5994]: Accepted publickey for core from 139.178.89.65 port 41784 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:41.668344 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:41.674679 systemd-logind[1854]: New session 15 of user core. Jul 15 23:58:41.681551 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 23:58:42.398938 sshd[5996]: Connection closed by 139.178.89.65 port 41784 Jul 15 23:58:42.399844 sshd-session[5994]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:42.405782 systemd-logind[1854]: Session 15 logged out. Waiting for processes to exit. Jul 15 23:58:42.406665 systemd[1]: sshd@14-172.31.28.113:22-139.178.89.65:41784.service: Deactivated successfully. Jul 15 23:58:42.412614 systemd[1]: session-15.scope: Deactivated successfully. 
Jul 15 23:58:42.416171 systemd-logind[1854]: Removed session 15. Jul 15 23:58:43.009672 containerd[1896]: time="2025-07-15T23:58:43.009622319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\" id:\"ed50ce37c0b9577bed0f6d844d4d9b958a7b7521e14871fcd612b02a9ed3fdb9\" pid:6027 exited_at:{seconds:1752623923 nanos:9276860}" Jul 15 23:58:47.433514 systemd[1]: Started sshd@15-172.31.28.113:22-139.178.89.65:41800.service - OpenSSH per-connection server daemon (139.178.89.65:41800). Jul 15 23:58:47.659333 sshd[6037]: Accepted publickey for core from 139.178.89.65 port 41800 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:47.661768 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:47.673017 systemd-logind[1854]: New session 16 of user core. Jul 15 23:58:47.677481 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 23:58:48.039550 sshd[6041]: Connection closed by 139.178.89.65 port 41800 Jul 15 23:58:48.041781 sshd-session[6037]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:48.048906 systemd-logind[1854]: Session 16 logged out. Waiting for processes to exit. Jul 15 23:58:48.049715 systemd[1]: sshd@15-172.31.28.113:22-139.178.89.65:41800.service: Deactivated successfully. Jul 15 23:58:48.055744 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 23:58:48.077171 systemd-logind[1854]: Removed session 16. Jul 15 23:58:48.079548 systemd[1]: Started sshd@16-172.31.28.113:22-139.178.89.65:41812.service - OpenSSH per-connection server daemon (139.178.89.65:41812). 
Jul 15 23:58:48.328052 sshd[6053]: Accepted publickey for core from 139.178.89.65 port 41812 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:48.329536 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:48.338894 systemd-logind[1854]: New session 17 of user core. Jul 15 23:58:48.348120 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 23:58:49.132842 sshd[6055]: Connection closed by 139.178.89.65 port 41812 Jul 15 23:58:49.144155 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:49.169670 systemd[1]: sshd@16-172.31.28.113:22-139.178.89.65:41812.service: Deactivated successfully. Jul 15 23:58:49.174820 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 23:58:49.177441 systemd-logind[1854]: Session 17 logged out. Waiting for processes to exit. Jul 15 23:58:49.185895 systemd[1]: Started sshd@17-172.31.28.113:22-139.178.89.65:50068.service - OpenSSH per-connection server daemon (139.178.89.65:50068). Jul 15 23:58:49.202207 systemd-logind[1854]: Removed session 17. Jul 15 23:58:49.458559 sshd[6065]: Accepted publickey for core from 139.178.89.65 port 50068 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:49.460327 sshd-session[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:49.467199 systemd-logind[1854]: New session 18 of user core. Jul 15 23:58:49.475345 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 23:58:50.899121 sshd[6067]: Connection closed by 139.178.89.65 port 50068 Jul 15 23:58:50.906570 sshd-session[6065]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:50.919837 systemd[1]: sshd@17-172.31.28.113:22-139.178.89.65:50068.service: Deactivated successfully. Jul 15 23:58:50.927410 systemd[1]: session-18.scope: Deactivated successfully. 
Jul 15 23:58:50.935305 systemd-logind[1854]: Session 18 logged out. Waiting for processes to exit. Jul 15 23:58:50.957883 systemd[1]: Started sshd@18-172.31.28.113:22-139.178.89.65:50070.service - OpenSSH per-connection server daemon (139.178.89.65:50070). Jul 15 23:58:50.966028 systemd-logind[1854]: Removed session 18. Jul 15 23:58:51.236489 sshd[6082]: Accepted publickey for core from 139.178.89.65 port 50070 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:51.244111 sshd-session[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:51.262375 systemd-logind[1854]: New session 19 of user core. Jul 15 23:58:51.271449 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 23:58:53.414782 sshd[6086]: Connection closed by 139.178.89.65 port 50070 Jul 15 23:58:53.413044 sshd-session[6082]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:53.430336 systemd[1]: sshd@18-172.31.28.113:22-139.178.89.65:50070.service: Deactivated successfully. Jul 15 23:58:53.434834 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 23:58:53.436280 systemd[1]: session-19.scope: Consumed 951ms CPU time, 72.4M memory peak. Jul 15 23:58:53.439872 systemd-logind[1854]: Session 19 logged out. Waiting for processes to exit. Jul 15 23:58:53.462416 systemd[1]: Started sshd@19-172.31.28.113:22-139.178.89.65:50074.service - OpenSSH per-connection server daemon (139.178.89.65:50074). Jul 15 23:58:53.465458 systemd-logind[1854]: Removed session 19. Jul 15 23:58:53.768955 sshd[6096]: Accepted publickey for core from 139.178.89.65 port 50074 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:53.771133 sshd-session[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:53.777683 systemd-logind[1854]: New session 20 of user core. Jul 15 23:58:53.785493 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 15 23:58:54.347895 sshd[6098]: Connection closed by 139.178.89.65 port 50074 Jul 15 23:58:54.349676 sshd-session[6096]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:54.355180 systemd-logind[1854]: Session 20 logged out. Waiting for processes to exit. Jul 15 23:58:54.355817 systemd[1]: sshd@19-172.31.28.113:22-139.178.89.65:50074.service: Deactivated successfully. Jul 15 23:58:54.358139 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 23:58:54.360446 systemd-logind[1854]: Removed session 20. Jul 15 23:58:59.387177 systemd[1]: Started sshd@20-172.31.28.113:22-139.178.89.65:38208.service - OpenSSH per-connection server daemon (139.178.89.65:38208). Jul 15 23:58:59.615602 sshd[6117]: Accepted publickey for core from 139.178.89.65 port 38208 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc Jul 15 23:58:59.626459 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:59.635486 systemd-logind[1854]: New session 21 of user core. Jul 15 23:58:59.641632 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 23:59:00.006549 sshd[6119]: Connection closed by 139.178.89.65 port 38208 Jul 15 23:59:00.007779 sshd-session[6117]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:00.070925 systemd[1]: sshd@20-172.31.28.113:22-139.178.89.65:38208.service: Deactivated successfully. Jul 15 23:59:00.071486 systemd-logind[1854]: Session 21 logged out. Waiting for processes to exit. Jul 15 23:59:00.095821 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 23:59:00.119509 systemd-logind[1854]: Removed session 21. 
Jul 15 23:59:01.781174 containerd[1896]: time="2025-07-15T23:59:01.754948769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\" id:\"6a6fd3d66ecf1947831814f08a18bd0a4ad3245b350cfe6aed9548a60a81d58e\" pid:6160 exited_at:{seconds:1752623941 nanos:727834317}"
Jul 15 23:59:01.968254 containerd[1896]: time="2025-07-15T23:59:01.966096659Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\" id:\"af9b89476250e1a56d2505419ca526778c7cc5c95a83a8d5c4a339287302c105\" pid:6142 exited_at:{seconds:1752623941 nanos:964339926}"
Jul 15 23:59:05.040490 systemd[1]: Started sshd@21-172.31.28.113:22-139.178.89.65:38210.service - OpenSSH per-connection server daemon (139.178.89.65:38210).
Jul 15 23:59:05.339241 sshd[6179]: Accepted publickey for core from 139.178.89.65 port 38210 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:59:05.344794 sshd-session[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:05.357465 systemd-logind[1854]: New session 22 of user core.
Jul 15 23:59:05.362347 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:59:06.219771 sshd[6181]: Connection closed by 139.178.89.65 port 38210
Jul 15 23:59:06.222024 sshd-session[6179]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:06.229653 systemd[1]: sshd@21-172.31.28.113:22-139.178.89.65:38210.service: Deactivated successfully.
Jul 15 23:59:06.229667 systemd-logind[1854]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:59:06.235421 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:59:06.238402 systemd-logind[1854]: Removed session 22.
Jul 15 23:59:10.190249 containerd[1896]: time="2025-07-15T23:59:10.189956580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\" id:\"0db9daf77e4b0f1854982f573c873799e2d78a4a3f915a17b0312e72b9f4a5fd\" pid:6205 exited_at:{seconds:1752623950 nanos:189448479}"
Jul 15 23:59:11.253180 systemd[1]: Started sshd@22-172.31.28.113:22-139.178.89.65:38482.service - OpenSSH per-connection server daemon (139.178.89.65:38482).
Jul 15 23:59:11.440264 sshd[6217]: Accepted publickey for core from 139.178.89.65 port 38482 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:59:11.442295 sshd-session[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:11.449257 systemd-logind[1854]: New session 23 of user core.
Jul 15 23:59:11.457412 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 23:59:11.715194 sshd[6219]: Connection closed by 139.178.89.65 port 38482
Jul 15 23:59:11.716453 sshd-session[6217]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:11.722502 systemd[1]: sshd@22-172.31.28.113:22-139.178.89.65:38482.service: Deactivated successfully.
Jul 15 23:59:11.726052 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 23:59:11.728427 systemd-logind[1854]: Session 23 logged out. Waiting for processes to exit.
Jul 15 23:59:11.733466 systemd-logind[1854]: Removed session 23.
Jul 15 23:59:13.024826 containerd[1896]: time="2025-07-15T23:59:13.024608592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\" id:\"f80656e127d11d832f2a53ad47505e487e39e5eeb1c97a34f871707ec4b39d82\" pid:6242 exited_at:{seconds:1752623953 nanos:23396200}"
Jul 15 23:59:16.752697 systemd[1]: Started sshd@23-172.31.28.113:22-139.178.89.65:38492.service - OpenSSH per-connection server daemon (139.178.89.65:38492).
Jul 15 23:59:17.027447 sshd[6252]: Accepted publickey for core from 139.178.89.65 port 38492 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:59:17.029765 sshd-session[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:17.037499 systemd-logind[1854]: New session 24 of user core.
Jul 15 23:59:17.046814 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 23:59:17.480119 sshd[6254]: Connection closed by 139.178.89.65 port 38492
Jul 15 23:59:17.481293 sshd-session[6252]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:17.487172 systemd-logind[1854]: Session 24 logged out. Waiting for processes to exit.
Jul 15 23:59:17.489846 systemd[1]: sshd@23-172.31.28.113:22-139.178.89.65:38492.service: Deactivated successfully.
Jul 15 23:59:17.493092 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 23:59:17.498173 systemd-logind[1854]: Removed session 24.
Jul 15 23:59:22.520655 systemd[1]: Started sshd@24-172.31.28.113:22-139.178.89.65:52202.service - OpenSSH per-connection server daemon (139.178.89.65:52202).
Jul 15 23:59:22.756876 sshd[6274]: Accepted publickey for core from 139.178.89.65 port 52202 ssh2: RSA SHA256:KxZSjSYoVf3yQ5CJBlOr1bi0nGsymhbHScGevy2ZxGc
Jul 15 23:59:22.759351 sshd-session[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:22.768947 systemd-logind[1854]: New session 25 of user core.
Jul 15 23:59:22.778495 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 23:59:23.060032 sshd[6276]: Connection closed by 139.178.89.65 port 52202
Jul 15 23:59:23.061078 sshd-session[6274]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:23.071044 systemd-logind[1854]: Session 25 logged out. Waiting for processes to exit.
Jul 15 23:59:23.073622 systemd[1]: sshd@24-172.31.28.113:22-139.178.89.65:52202.service: Deactivated successfully.
Jul 15 23:59:23.078153 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 23:59:23.083316 systemd-logind[1854]: Removed session 25.
Jul 15 23:59:30.959341 containerd[1896]: time="2025-07-15T23:59:30.958554917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69c13f22a3b28382525da0fdd8e8d911e8cfb351e0334c737a7cab1015f15eb\" id:\"d91a46cfd8ed909631c0893fcc14a6a0f24f1f70c1e536c0dad032d8af5eabe3\" pid:6302 exited_at:{seconds:1752623970 nanos:957957156}"
Jul 15 23:59:37.678603 systemd[1]: cri-containerd-b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7.scope: Deactivated successfully.
Jul 15 23:59:37.678931 systemd[1]: cri-containerd-b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7.scope: Consumed 4.338s CPU time, 90.6M memory peak, 120.3M read from disk.
Jul 15 23:59:37.884296 containerd[1896]: time="2025-07-15T23:59:37.884249768Z" level=info msg="received exit event container_id:\"b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7\" id:\"b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7\" pid:3016 exit_status:1 exited_at:{seconds:1752623977 nanos:834871841}"
Jul 15 23:59:37.908880 containerd[1896]: time="2025-07-15T23:59:37.908663931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7\" id:\"b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7\" pid:3016 exit_status:1 exited_at:{seconds:1752623977 nanos:834871841}"
Jul 15 23:59:38.026193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7-rootfs.mount: Deactivated successfully.
Jul 15 23:59:38.806710 systemd[1]: cri-containerd-15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570.scope: Deactivated successfully.
Jul 15 23:59:38.808006 systemd[1]: cri-containerd-15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570.scope: Consumed 14.891s CPU time, 112.5M memory peak, 76.7M read from disk.
Jul 15 23:59:38.817198 containerd[1896]: time="2025-07-15T23:59:38.817146250Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570\" id:\"15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570\" pid:3516 exit_status:1 exited_at:{seconds:1752623978 nanos:816685052}"
Jul 15 23:59:38.817198 containerd[1896]: time="2025-07-15T23:59:38.817187158Z" level=info msg="received exit event container_id:\"15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570\" id:\"15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570\" pid:3516 exit_status:1 exited_at:{seconds:1752623978 nanos:816685052}"
Jul 15 23:59:38.846820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570-rootfs.mount: Deactivated successfully.
Jul 15 23:59:38.898293 containerd[1896]: time="2025-07-15T23:59:38.898194426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\" id:\"ef53d0d9dfe9e6db28bf5ed71a0971a3af3387b7f39094d4591c81148ad8f809\" pid:6371 exit_status:1 exited_at:{seconds:1752623978 nanos:897957199}"
Jul 15 23:59:39.105617 kubelet[3190]: I0715 23:59:39.105482 3190 scope.go:117] "RemoveContainer" containerID="b4669ddaf852a4a5c1e5a7075a5b8ae6bd25fa6c030701a73acc22f5bf5db7b7"
Jul 15 23:59:39.115553 kubelet[3190]: I0715 23:59:39.115515 3190 scope.go:117] "RemoveContainer" containerID="15042eb1b7668dcf9420906316a2cd8a00be224662aa7d46f83f1e45a8f0f570"
Jul 15 23:59:39.191004 containerd[1896]: time="2025-07-15T23:59:39.190934138Z" level=info msg="CreateContainer within sandbox \"2199b3e00990525bfdd7d2033144148c603f75b14841e5342c33f72aedede9bf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 15 23:59:39.192717 containerd[1896]: time="2025-07-15T23:59:39.192678936Z" level=info msg="CreateContainer within sandbox \"e1c83904c06b65d6b4a3edda0dceea2f3c6037af98baa3a505ca83ea40ef5e1b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 15 23:59:39.369847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506102499.mount: Deactivated successfully.
Jul 15 23:59:39.382918 containerd[1896]: time="2025-07-15T23:59:39.379746310Z" level=info msg="Container ad32c73e296dfd0d5c2a58fcae06480913aa0109cffc42611e74352a5d9174b0: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:59:39.387400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296742203.mount: Deactivated successfully.
Jul 15 23:59:39.394305 containerd[1896]: time="2025-07-15T23:59:39.391723147Z" level=info msg="Container 033540b9aaabee39168879d679f7866622941fb7d556e57bf7682c8114b0f1e7: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:59:39.394499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91369449.mount: Deactivated successfully.
Jul 15 23:59:39.416200 containerd[1896]: time="2025-07-15T23:59:39.416146293Z" level=info msg="CreateContainer within sandbox \"2199b3e00990525bfdd7d2033144148c603f75b14841e5342c33f72aedede9bf\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ad32c73e296dfd0d5c2a58fcae06480913aa0109cffc42611e74352a5d9174b0\""
Jul 15 23:59:39.416820 containerd[1896]: time="2025-07-15T23:59:39.416798089Z" level=info msg="StartContainer for \"ad32c73e296dfd0d5c2a58fcae06480913aa0109cffc42611e74352a5d9174b0\""
Jul 15 23:59:39.417504 containerd[1896]: time="2025-07-15T23:59:39.417383132Z" level=info msg="CreateContainer within sandbox \"e1c83904c06b65d6b4a3edda0dceea2f3c6037af98baa3a505ca83ea40ef5e1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"033540b9aaabee39168879d679f7866622941fb7d556e57bf7682c8114b0f1e7\""
Jul 15 23:59:39.419137 containerd[1896]: time="2025-07-15T23:59:39.417974918Z" level=info msg="StartContainer for \"033540b9aaabee39168879d679f7866622941fb7d556e57bf7682c8114b0f1e7\""
Jul 15 23:59:39.419137 containerd[1896]: time="2025-07-15T23:59:39.419035027Z" level=info msg="connecting to shim 033540b9aaabee39168879d679f7866622941fb7d556e57bf7682c8114b0f1e7" address="unix:///run/containerd/s/b821b5eeebd6442781a6df43ce9cbc3760810082fcd7d9d6a9b01351c71752bf" protocol=ttrpc version=3
Jul 15 23:59:39.425883 containerd[1896]: time="2025-07-15T23:59:39.424472226Z" level=info msg="connecting to shim ad32c73e296dfd0d5c2a58fcae06480913aa0109cffc42611e74352a5d9174b0" address="unix:///run/containerd/s/05ae5f87b95299e8165c58f9562db5baacb9ace4f16a86dceae9aec5e674c71d" protocol=ttrpc version=3
Jul 15 23:59:39.528524 systemd[1]: Started cri-containerd-ad32c73e296dfd0d5c2a58fcae06480913aa0109cffc42611e74352a5d9174b0.scope - libcontainer container ad32c73e296dfd0d5c2a58fcae06480913aa0109cffc42611e74352a5d9174b0.
Jul 15 23:59:39.538743 systemd[1]: Started cri-containerd-033540b9aaabee39168879d679f7866622941fb7d556e57bf7682c8114b0f1e7.scope - libcontainer container 033540b9aaabee39168879d679f7866622941fb7d556e57bf7682c8114b0f1e7.
Jul 15 23:59:39.627933 containerd[1896]: time="2025-07-15T23:59:39.627818331Z" level=info msg="StartContainer for \"ad32c73e296dfd0d5c2a58fcae06480913aa0109cffc42611e74352a5d9174b0\" returns successfully"
Jul 15 23:59:39.666729 containerd[1896]: time="2025-07-15T23:59:39.666681107Z" level=info msg="StartContainer for \"033540b9aaabee39168879d679f7866622941fb7d556e57bf7682c8114b0f1e7\" returns successfully"
Jul 15 23:59:39.898500 containerd[1896]: time="2025-07-15T23:59:39.898349417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e6190860a6ff26e57706a5feeb6f6220d91ea9725f9099fba132e188bf239f4\" id:\"ed3e4b1fbe1e602bb7c49901895f59a0b18987891a9ef181655deb77fe3b90fa\" pid:6453 exited_at:{seconds:1752623979 nanos:897928488}"
Jul 15 23:59:40.370124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount190742532.mount: Deactivated successfully.
Jul 15 23:59:42.722503 systemd[1]: cri-containerd-6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1.scope: Deactivated successfully.
Jul 15 23:59:42.724426 containerd[1896]: time="2025-07-15T23:59:42.723371663Z" level=info msg="received exit event container_id:\"6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1\" id:\"6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1\" pid:3027 exit_status:1 exited_at:{seconds:1752623982 nanos:722560864}"
Jul 15 23:59:42.724426 containerd[1896]: time="2025-07-15T23:59:42.723378724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1\" id:\"6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1\" pid:3027 exit_status:1 exited_at:{seconds:1752623982 nanos:722560864}"
Jul 15 23:59:42.723507 systemd[1]: cri-containerd-6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1.scope: Consumed 2.716s CPU time, 36.8M memory peak, 66.6M read from disk.
Jul 15 23:59:42.766909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1-rootfs.mount: Deactivated successfully.
Jul 15 23:59:42.851088 kubelet[3190]: E0715 23:59:42.849588 3190 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-113?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jul 15 23:59:42.857961 containerd[1896]: time="2025-07-15T23:59:42.857904909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68124cd3437dd2203c8296e8142f971c5929c76891de58bdca5dae68d03a52c2\" id:\"98e80ac20494563d195313b4af43e24bbc58fd1f68ca0228567ef3de99be8c7e\" pid:6492 exit_status:1 exited_at:{seconds:1752623982 nanos:852398458}"
Jul 15 23:59:43.139214 kubelet[3190]: I0715 23:59:43.138797 3190 scope.go:117] "RemoveContainer" containerID="6c6de8fb877ea4225ea0b6ce664ff8facf6b9a2943f3a9b98fc04190e16eb8a1"
Jul 15 23:59:43.141235 containerd[1896]: time="2025-07-15T23:59:43.141198294Z" level=info msg="CreateContainer within sandbox \"ce5d5808313a6b7ec42780404486031177d746347f47f48d30840a684747ec14\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 15 23:59:43.166465 containerd[1896]: time="2025-07-15T23:59:43.166406396Z" level=info msg="Container bea723ddfb888ebf4a6f8523d1e7fd9f5ade622f3951f68db544e061cb531468: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:59:43.183358 containerd[1896]: time="2025-07-15T23:59:43.183315090Z" level=info msg="CreateContainer within sandbox \"ce5d5808313a6b7ec42780404486031177d746347f47f48d30840a684747ec14\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bea723ddfb888ebf4a6f8523d1e7fd9f5ade622f3951f68db544e061cb531468\""
Jul 15 23:59:43.183973 containerd[1896]: time="2025-07-15T23:59:43.183931464Z" level=info msg="StartContainer for \"bea723ddfb888ebf4a6f8523d1e7fd9f5ade622f3951f68db544e061cb531468\""
Jul 15 23:59:43.185363 containerd[1896]: time="2025-07-15T23:59:43.185319647Z" level=info msg="connecting to shim bea723ddfb888ebf4a6f8523d1e7fd9f5ade622f3951f68db544e061cb531468" address="unix:///run/containerd/s/82359b0ed59044ae0c4bf581cccb59588c593cdce0165e860c3aeb62f045900b" protocol=ttrpc version=3
Jul 15 23:59:43.216508 systemd[1]: Started cri-containerd-bea723ddfb888ebf4a6f8523d1e7fd9f5ade622f3951f68db544e061cb531468.scope - libcontainer container bea723ddfb888ebf4a6f8523d1e7fd9f5ade622f3951f68db544e061cb531468.
Jul 15 23:59:43.278466 containerd[1896]: time="2025-07-15T23:59:43.278426599Z" level=info msg="StartContainer for \"bea723ddfb888ebf4a6f8523d1e7fd9f5ade622f3951f68db544e061cb531468\" returns successfully"