Sep 12 22:55:46.907184 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 20:38:35 -00 2025
Sep 12 22:55:46.907224 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 22:55:46.907239 kernel: BIOS-provided physical RAM map:
Sep 12 22:55:46.907252 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 22:55:46.907264 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 12 22:55:46.907277 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 22:55:46.907288 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 22:55:46.907298 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 22:55:46.907313 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 22:55:46.907323 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 22:55:46.907333 kernel: NX (Execute Disable) protection: active
Sep 12 22:55:46.907344 kernel: APIC: Static calls initialized
Sep 12 22:55:46.907357 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 12 22:55:46.907369 kernel: extended physical RAM map:
Sep 12 22:55:46.907387 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 22:55:46.907399 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Sep 12 22:55:46.907411 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Sep 12 22:55:46.907422 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Sep 12 22:55:46.907436 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 22:55:46.907449 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 22:55:46.907462 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 22:55:46.907476 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 22:55:46.907490 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 22:55:46.907501 kernel: efi: EFI v2.7 by EDK II
Sep 12 22:55:46.907518 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 12 22:55:46.907532 kernel: secureboot: Secure boot disabled
Sep 12 22:55:46.907545 kernel: SMBIOS 2.7 present.
Sep 12 22:55:46.907556 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 12 22:55:46.907569 kernel: DMI: Memory slots populated: 1/1
Sep 12 22:55:46.907581 kernel: Hypervisor detected: KVM
Sep 12 22:55:46.907594 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 22:55:46.907606 kernel: kvm-clock: using sched offset of 5181657012 cycles
Sep 12 22:55:46.907620 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 22:55:46.907634 kernel: tsc: Detected 2499.996 MHz processor
Sep 12 22:55:46.907648 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 22:55:46.907666 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 22:55:46.909722 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 12 22:55:46.909747 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 22:55:46.909761 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 22:55:46.909772 kernel: Using GB pages for direct mapping
Sep 12 22:55:46.909793 kernel: ACPI: Early table checksum verification disabled
Sep 12 22:55:46.909810 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 12 22:55:46.909824 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 22:55:46.909837 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 22:55:46.909850 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 12 22:55:46.909864 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 12 22:55:46.909877 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 12 22:55:46.909890 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 22:55:46.909904 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 22:55:46.909920 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 12 22:55:46.909933 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 12 22:55:46.909947 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 22:55:46.909960 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 22:55:46.909973 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 12 22:55:46.909985 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 12 22:55:46.909999 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 12 22:55:46.910012 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 12 22:55:46.910027 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 12 22:55:46.910040 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 12 22:55:46.910053 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 12 22:55:46.910066 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 12 22:55:46.910079 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 12 22:55:46.910092 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 12 22:55:46.910105 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 12 22:55:46.910117 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 12 22:55:46.910130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 12 22:55:46.910144 kernel: NUMA: Initialized distance table, cnt=1
Sep 12 22:55:46.910159 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Sep 12 22:55:46.910172 kernel: Zone ranges:
Sep 12 22:55:46.910185 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 22:55:46.910198 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 12 22:55:46.910211 kernel: Normal empty
Sep 12 22:55:46.910224 kernel: Device empty
Sep 12 22:55:46.910236 kernel: Movable zone start for each node
Sep 12 22:55:46.910250 kernel: Early memory node ranges
Sep 12 22:55:46.910262 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 22:55:46.910278 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 12 22:55:46.910291 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 12 22:55:46.910304 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 12 22:55:46.910317 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 22:55:46.910330 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 22:55:46.910344 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 22:55:46.910357 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 12 22:55:46.910370 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 22:55:46.910384 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 22:55:46.910399 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 12 22:55:46.910412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 22:55:46.910425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 22:55:46.910437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 22:55:46.910450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 22:55:46.910463 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 22:55:46.910477 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 22:55:46.910490 kernel: TSC deadline timer available
Sep 12 22:55:46.910503 kernel: CPU topo: Max. logical packages: 1
Sep 12 22:55:46.910516 kernel: CPU topo: Max. logical dies: 1
Sep 12 22:55:46.910532 kernel: CPU topo: Max. dies per package: 1
Sep 12 22:55:46.910544 kernel: CPU topo: Max. threads per core: 2
Sep 12 22:55:46.910557 kernel: CPU topo: Num. cores per package: 1
Sep 12 22:55:46.910570 kernel: CPU topo: Num. threads per package: 2
Sep 12 22:55:46.910583 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 12 22:55:46.910595 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 22:55:46.910608 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 12 22:55:46.910621 kernel: Booting paravirtualized kernel on KVM
Sep 12 22:55:46.910634 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 22:55:46.910650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 22:55:46.910663 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 12 22:55:46.910695 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 12 22:55:46.910708 kernel: pcpu-alloc: [0] 0 1
Sep 12 22:55:46.910721 kernel: kvm-guest: PV spinlocks enabled
Sep 12 22:55:46.910735 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 22:55:46.910750 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 22:55:46.910764 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 22:55:46.910780 kernel: random: crng init done
Sep 12 22:55:46.910793 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 22:55:46.910806 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 22:55:46.910819 kernel: Fallback order for Node 0: 0
Sep 12 22:55:46.910832 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Sep 12 22:55:46.910845 kernel: Policy zone: DMA32
Sep 12 22:55:46.910869 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 22:55:46.910886 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 22:55:46.910900 kernel: Kernel/User page tables isolation: enabled
Sep 12 22:55:46.910913 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 22:55:46.910926 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 22:55:46.910940 kernel: Dynamic Preempt: voluntary
Sep 12 22:55:46.910956 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 22:55:46.910971 kernel: rcu: RCU event tracing is enabled.
Sep 12 22:55:46.910985 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 22:55:46.910998 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 22:55:46.911012 kernel: Rude variant of Tasks RCU enabled.
Sep 12 22:55:46.911028 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 22:55:46.911043 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 22:55:46.911057 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 22:55:46.911071 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 22:55:46.911085 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 22:55:46.911099 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 22:55:46.911113 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 22:55:46.911127 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 22:55:46.911141 kernel: Console: colour dummy device 80x25
Sep 12 22:55:46.911158 kernel: printk: legacy console [tty0] enabled
Sep 12 22:55:46.911172 kernel: printk: legacy console [ttyS0] enabled
Sep 12 22:55:46.911186 kernel: ACPI: Core revision 20240827
Sep 12 22:55:46.911200 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 12 22:55:46.911214 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 22:55:46.911227 kernel: x2apic enabled
Sep 12 22:55:46.911241 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 22:55:46.911255 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 12 22:55:46.911270 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Sep 12 22:55:46.911286 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 22:55:46.911300 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 22:55:46.911312 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 22:55:46.911326 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 22:55:46.911338 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 22:55:46.911351 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 22:55:46.911367 kernel: RETBleed: Vulnerable
Sep 12 22:55:46.911382 kernel: Speculative Store Bypass: Vulnerable
Sep 12 22:55:46.911396 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 22:55:46.911411 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 22:55:46.911429 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 22:55:46.911444 kernel: active return thunk: its_return_thunk
Sep 12 22:55:46.911458 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 22:55:46.911474 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 22:55:46.911488 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 22:55:46.911504 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 22:55:46.911518 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 22:55:46.911533 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 22:55:46.911548 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 22:55:46.911563 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 22:55:46.911578 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 22:55:46.911596 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 12 22:55:46.911611 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 22:55:46.911626 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 22:55:46.911641 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 22:55:46.911656 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 12 22:55:46.911671 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 12 22:55:46.912033 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 12 22:55:46.912047 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 12 22:55:46.912060 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 12 22:55:46.912073 kernel: Freeing SMP alternatives memory: 32K
Sep 12 22:55:46.912087 kernel: pid_max: default: 32768 minimum: 301
Sep 12 22:55:46.912107 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 22:55:46.912121 kernel: landlock: Up and running.
Sep 12 22:55:46.912135 kernel: SELinux: Initializing.
Sep 12 22:55:46.912149 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 22:55:46.912162 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 22:55:46.912176 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 22:55:46.912192 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 22:55:46.912206 kernel: signal: max sigframe size: 3632
Sep 12 22:55:46.912221 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 22:55:46.912236 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 22:55:46.912251 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 22:55:46.912269 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 22:55:46.912284 kernel: smp: Bringing up secondary CPUs ...
Sep 12 22:55:46.912299 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 22:55:46.912314 kernel: .... node #0, CPUs: #1
Sep 12 22:55:46.912330 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 22:55:46.912347 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 22:55:46.912362 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 22:55:46.912377 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Sep 12 22:55:46.912396 kernel: Memory: 1908060K/2037804K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54084K init, 2880K bss, 125188K reserved, 0K cma-reserved)
Sep 12 22:55:46.912411 kernel: devtmpfs: initialized
Sep 12 22:55:46.912426 kernel: x86/mm: Memory block size: 128MB
Sep 12 22:55:46.912442 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 12 22:55:46.912457 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 22:55:46.912473 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 22:55:46.912488 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 22:55:46.912503 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 22:55:46.912519 kernel: audit: initializing netlink subsys (disabled)
Sep 12 22:55:46.912537 kernel: audit: type=2000 audit(1757717744.827:1): state=initialized audit_enabled=0 res=1
Sep 12 22:55:46.912552 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 22:55:46.912567 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 22:55:46.912582 kernel: cpuidle: using governor menu
Sep 12 22:55:46.912597 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 22:55:46.912612 kernel: dca service started, version 1.12.1
Sep 12 22:55:46.912628 kernel: PCI: Using configuration type 1 for base access
Sep 12 22:55:46.912644 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 22:55:46.912659 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 22:55:46.912782 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 22:55:46.912798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 22:55:46.912813 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 22:55:46.912828 kernel: ACPI: Added _OSI(Module Device)
Sep 12 22:55:46.912847 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 22:55:46.912859 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 22:55:46.912873 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 22:55:46.912886 kernel: ACPI: Interpreter enabled
Sep 12 22:55:46.912900 kernel: ACPI: PM: (supports S0 S5)
Sep 12 22:55:46.912920 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 22:55:46.912934 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 22:55:46.912948 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 22:55:46.912962 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 22:55:46.912976 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 22:55:46.913208 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 22:55:46.913342 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 22:55:46.913473 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 22:55:46.913490 kernel: acpiphp: Slot [3] registered
Sep 12 22:55:46.913505 kernel: acpiphp: Slot [4] registered
Sep 12 22:55:46.913519 kernel: acpiphp: Slot [5] registered
Sep 12 22:55:46.913533 kernel: acpiphp: Slot [6] registered
Sep 12 22:55:46.913546 kernel: acpiphp: Slot [7] registered
Sep 12 22:55:46.913560 kernel: acpiphp: Slot [8] registered
Sep 12 22:55:46.913574 kernel: acpiphp: Slot [9] registered
Sep 12 22:55:46.913587 kernel: acpiphp: Slot [10] registered
Sep 12 22:55:46.913605 kernel: acpiphp: Slot [11] registered
Sep 12 22:55:46.913619 kernel: acpiphp: Slot [12] registered
Sep 12 22:55:46.913633 kernel: acpiphp: Slot [13] registered
Sep 12 22:55:46.913646 kernel: acpiphp: Slot [14] registered
Sep 12 22:55:46.913660 kernel: acpiphp: Slot [15] registered
Sep 12 22:55:46.913687 kernel: acpiphp: Slot [16] registered
Sep 12 22:55:46.913701 kernel: acpiphp: Slot [17] registered
Sep 12 22:55:46.913715 kernel: acpiphp: Slot [18] registered
Sep 12 22:55:46.913729 kernel: acpiphp: Slot [19] registered
Sep 12 22:55:46.913742 kernel: acpiphp: Slot [20] registered
Sep 12 22:55:46.913759 kernel: acpiphp: Slot [21] registered
Sep 12 22:55:46.913773 kernel: acpiphp: Slot [22] registered
Sep 12 22:55:46.913786 kernel: acpiphp: Slot [23] registered
Sep 12 22:55:46.913801 kernel: acpiphp: Slot [24] registered
Sep 12 22:55:46.913814 kernel: acpiphp: Slot [25] registered
Sep 12 22:55:46.913828 kernel: acpiphp: Slot [26] registered
Sep 12 22:55:46.913841 kernel: acpiphp: Slot [27] registered
Sep 12 22:55:46.913855 kernel: acpiphp: Slot [28] registered
Sep 12 22:55:46.913869 kernel: acpiphp: Slot [29] registered
Sep 12 22:55:46.913886 kernel: acpiphp: Slot [30] registered
Sep 12 22:55:46.913899 kernel: acpiphp: Slot [31] registered
Sep 12 22:55:46.913913 kernel: PCI host bridge to bus 0000:00
Sep 12 22:55:46.914041 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 22:55:46.914158 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 22:55:46.914274 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 22:55:46.914391 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 22:55:46.914507 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 12 22:55:46.916745 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 22:55:46.916957 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 12 22:55:46.917108 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 12 22:55:46.917248 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Sep 12 22:55:46.917377 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 22:55:46.917538 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 12 22:55:46.918176 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 12 22:55:46.918352 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 12 22:55:46.918490 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 12 22:55:46.918621 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 12 22:55:46.918847 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 12 22:55:46.919007 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 22:55:46.919141 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Sep 12 22:55:46.919280 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 22:55:46.919407 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 22:55:46.919552 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Sep 12 22:55:46.919710 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Sep 12 22:55:46.919882 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Sep 12 22:55:46.920033 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Sep 12 22:55:46.920060 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 22:55:46.920078 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 22:55:46.920095 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 22:55:46.920112 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 22:55:46.920128 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 22:55:46.920145 kernel: iommu: Default domain type: Translated
Sep 12 22:55:46.920162 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 22:55:46.920178 kernel: efivars: Registered efivars operations
Sep 12 22:55:46.920194 kernel: PCI: Using ACPI for IRQ routing
Sep 12 22:55:46.920215 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 22:55:46.920230 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Sep 12 22:55:46.920246 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 12 22:55:46.920263 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 12 22:55:46.920413 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 12 22:55:46.920562 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 12 22:55:46.921822 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 22:55:46.921855 kernel: vgaarb: loaded
Sep 12 22:55:46.921877 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 22:55:46.921894 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 12 22:55:46.921910 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 22:55:46.921927 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 22:55:46.921945 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 22:55:46.921961 kernel: pnp: PnP ACPI init
Sep 12 22:55:46.921978 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 22:55:46.921995 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 22:55:46.922012 kernel: NET: Registered PF_INET protocol family
Sep 12 22:55:46.922031 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 22:55:46.922046 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 22:55:46.922062 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 22:55:46.922078 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 22:55:46.922093 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 22:55:46.922108 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 22:55:46.922122 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 22:55:46.922138 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 22:55:46.922154 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 22:55:46.922171 kernel: NET: Registered PF_XDP protocol family
Sep 12 22:55:46.922338 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 22:55:46.922455 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 22:55:46.922568 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 22:55:46.923747 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 22:55:46.923999 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 12 22:55:46.924150 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 22:55:46.924170 kernel: PCI: CLS 0 bytes, default 64
Sep 12 22:55:46.924191 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 22:55:46.924206 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 12 22:55:46.924220 kernel: clocksource: Switched to clocksource tsc
Sep 12 22:55:46.924234 kernel: Initialise system trusted keyrings
Sep 12 22:55:46.924248 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 22:55:46.924262 kernel: Key type asymmetric registered
Sep 12 22:55:46.924275 kernel: Asymmetric key parser 'x509' registered
Sep 12 22:55:46.924289 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 22:55:46.924304 kernel: io scheduler mq-deadline registered
Sep 12 22:55:46.924320 kernel: io scheduler kyber registered
Sep 12 22:55:46.924335 kernel: io scheduler bfq registered
Sep 12 22:55:46.924349 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 22:55:46.924363 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 22:55:46.924377 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 22:55:46.924392 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 22:55:46.924406 kernel: i8042: Warning: Keylock active
Sep 12 22:55:46.924420 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 22:55:46.924434 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 22:55:46.924574 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 22:55:46.926395 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 22:55:46.926538 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T22:55:46 UTC (1757717746)
Sep 12 22:55:46.926659 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 22:55:46.926776 kernel: intel_pstate: CPU model not supported
Sep 12 22:55:46.926795 kernel: efifb: probing for efifb
Sep 12 22:55:46.926812 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 12 22:55:46.926828 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 12 22:55:46.926846 kernel: efifb: scrolling: redraw
Sep 12 22:55:46.926862 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 22:55:46.926878 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 22:55:46.926894 kernel: fb0: EFI VGA frame buffer device
Sep 12 22:55:46.926910 kernel: pstore: Using crash dump compression: deflate
Sep 12 22:55:46.926926 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 22:55:46.926942 kernel: NET: Registered PF_INET6 protocol family
Sep 12 22:55:46.926958 kernel: Segment Routing with IPv6
Sep 12 22:55:46.926974 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 22:55:46.926993 kernel: NET: Registered PF_PACKET protocol family
Sep 12 22:55:46.927009 kernel: Key type dns_resolver registered
Sep 12 22:55:46.927025 kernel: IPI shorthand broadcast: enabled
Sep 12 22:55:46.927041 kernel: sched_clock: Marking stable (2641002201, 148150185)->(2864105159, -74952773)
Sep 12 22:55:46.927057 kernel: registered taskstats version 1
Sep 12 22:55:46.927073 kernel: Loading compiled-in X.509 certificates
Sep 12 22:55:46.927089 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: c3297a5801573420030c321362a802da1fd49c4e'
Sep 12 22:55:46.927105 kernel: Demotion targets for Node 0: null
Sep 12 22:55:46.927121 kernel: Key type .fscrypt registered
Sep 12 22:55:46.927139 kernel: Key type fscrypt-provisioning registered
Sep 12 22:55:46.927155 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 22:55:46.927170 kernel: ima: Allocated hash algorithm: sha1
Sep 12 22:55:46.927185 kernel: ima: No architecture policies found
Sep 12 22:55:46.927201 kernel: clk: Disabling unused clocks
Sep 12 22:55:46.927216 kernel: Warning: unable to open an initial console.
Sep 12 22:55:46.927232 kernel: Freeing unused kernel image (initmem) memory: 54084K
Sep 12 22:55:46.927248 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 22:55:46.927268 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 12 22:55:46.927286 kernel: Run /init as init process
Sep 12 22:55:46.927302 kernel: with arguments:
Sep 12 22:55:46.927318 kernel: /init
Sep 12 22:55:46.927332 kernel: with environment:
Sep 12 22:55:46.927349 kernel: HOME=/
Sep 12 22:55:46.927367 kernel: TERM=linux
Sep 12 22:55:46.927383 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 22:55:46.927401 systemd[1]: Successfully made /usr/ read-only.
Sep 12 22:55:46.927421 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 22:55:46.927438 systemd[1]: Detected virtualization amazon. Sep 12 22:55:46.927453 systemd[1]: Detected architecture x86-64. Sep 12 22:55:46.927468 systemd[1]: Running in initrd. Sep 12 22:55:46.927486 systemd[1]: No hostname configured, using default hostname. Sep 12 22:55:46.927502 systemd[1]: Hostname set to . Sep 12 22:55:46.927517 systemd[1]: Initializing machine ID from VM UUID. Sep 12 22:55:46.927537 systemd[1]: Queued start job for default target initrd.target. Sep 12 22:55:46.927553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:55:46.927570 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:55:46.927588 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 22:55:46.927605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 22:55:46.927625 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 22:55:46.927643 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 22:55:46.927662 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 22:55:46.927694 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 22:55:46.927712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 12 22:55:46.927728 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:55:46.927744 systemd[1]: Reached target paths.target - Path Units. Sep 12 22:55:46.927766 systemd[1]: Reached target slices.target - Slice Units. Sep 12 22:55:46.927781 systemd[1]: Reached target swap.target - Swaps. Sep 12 22:55:46.927806 systemd[1]: Reached target timers.target - Timer Units. Sep 12 22:55:46.927822 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 22:55:46.927836 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 22:55:46.927852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 22:55:46.927868 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 22:55:46.927884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:55:46.927900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 22:55:46.927921 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:55:46.927940 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 22:55:46.927958 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 22:55:46.927974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 22:55:46.927990 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 22:55:46.928008 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 22:55:46.928023 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 22:55:46.928039 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 22:55:46.928059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 12 22:55:46.928074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:55:46.928120 systemd-journald[207]: Collecting audit messages is disabled. Sep 12 22:55:46.928155 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 22:55:46.928175 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:55:46.928191 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 22:55:46.930163 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 22:55:46.930194 systemd-journald[207]: Journal started Sep 12 22:55:46.930236 systemd-journald[207]: Runtime Journal (/run/log/journal/ec24b60d55e8f79e2727edf56dcb746e) is 4.8M, max 38.4M, 33.6M free. Sep 12 22:55:46.932749 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 22:55:46.943882 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 22:55:46.956671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:55:46.958041 systemd-modules-load[209]: Inserted module 'overlay' Sep 12 22:55:46.962836 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 22:55:46.966671 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 22:55:46.971999 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 22:55:46.978752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:55:46.989864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 12 22:55:47.005714 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 22:55:47.010718 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 22:55:47.015702 kernel: Bridge firewalling registered Sep 12 22:55:47.015814 systemd-modules-load[209]: Inserted module 'br_netfilter' Sep 12 22:55:47.018061 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 22:55:47.020164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 22:55:47.022235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:55:47.024556 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 22:55:47.027932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:55:47.042944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:55:47.047908 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 22:55:47.051466 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40 Sep 12 22:55:47.107180 systemd-resolved[253]: Positive Trust Anchors:
Sep 12 22:55:47.108147 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 22:55:47.108206 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 22:55:47.115724 systemd-resolved[253]: Defaulting to hostname 'linux'. Sep 12 22:55:47.119232 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 22:55:47.119955 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:55:47.157723 kernel: SCSI subsystem initialized Sep 12 22:55:47.167712 kernel: Loading iSCSI transport class v2.0-870. Sep 12 22:55:47.178770 kernel: iscsi: registered transport (tcp) Sep 12 22:55:47.200820 kernel: iscsi: registered transport (qla4xxx) Sep 12 22:55:47.200899 kernel: QLogic iSCSI HBA Driver Sep 12 22:55:47.220134 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 22:55:47.244891 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:55:47.248443 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 22:55:47.293946 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 22:55:47.296847 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 22:55:47.349726 kernel: raid6: avx512x4 gen() 17951 MB/s Sep 12 22:55:47.367710 kernel: raid6: avx512x2 gen() 17721 MB/s Sep 12 22:55:47.385708 kernel: raid6: avx512x1 gen() 17830 MB/s Sep 12 22:55:47.403709 kernel: raid6: avx2x4 gen() 17559 MB/s Sep 12 22:55:47.421723 kernel: raid6: avx2x2 gen() 17652 MB/s Sep 12 22:55:47.439971 kernel: raid6: avx2x1 gen() 13489 MB/s Sep 12 22:55:47.440042 kernel: raid6: using algorithm avx512x4 gen() 17951 MB/s Sep 12 22:55:47.458930 kernel: raid6: .... xor() 7589 MB/s, rmw enabled Sep 12 22:55:47.459019 kernel: raid6: using avx512x2 recovery algorithm Sep 12 22:55:47.479711 kernel: xor: automatically using best checksumming function avx Sep 12 22:55:47.651721 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 22:55:47.658412 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 22:55:47.660669 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:55:47.686957 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 12 22:55:47.693724 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:55:47.697876 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 22:55:47.723125 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Sep 12 22:55:47.751387 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 22:55:47.753865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 22:55:47.814376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:55:47.817859 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 12 22:55:47.904479 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 12 22:55:47.904779 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 12 22:55:47.911847 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 12 22:55:47.916066 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 12 22:55:47.918793 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 12 22:55:47.927703 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:02:c7:7d:c3:47 Sep 12 22:55:47.934697 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 22:55:47.937714 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 22:55:47.954013 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 22:55:47.954083 kernel: GPT:9289727 != 16777215 Sep 12 22:55:47.954103 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 22:55:47.954121 kernel: GPT:9289727 != 16777215 Sep 12 22:55:47.954138 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 22:55:47.954156 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 22:55:47.953516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:55:47.953604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:55:47.960844 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:55:47.963863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:55:47.965398 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:55:47.967195 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line. Sep 12 22:55:47.981293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:55:47.982537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 22:55:47.987959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:55:47.994416 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 22:55:47.994458 kernel: AES CTR mode by8 optimization enabled Sep 12 22:55:48.037712 kernel: nvme nvme0: using unchecked data buffer Sep 12 22:55:48.039531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:55:48.130203 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 12 22:55:48.172497 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 22:55:48.184287 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 12 22:55:48.194529 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 12 22:55:48.195110 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 12 22:55:48.206984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 22:55:48.207640 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 22:55:48.208949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:55:48.210091 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 22:55:48.211885 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 22:55:48.215886 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 22:55:48.234797 disk-uuid[695]: Primary Header is updated. Sep 12 22:55:48.234797 disk-uuid[695]: Secondary Entries is updated. Sep 12 22:55:48.234797 disk-uuid[695]: Secondary Header is updated. 
Sep 12 22:55:48.242032 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 22:55:48.244375 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 22:55:49.256456 disk-uuid[698]: The operation has completed successfully. Sep 12 22:55:49.257190 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 22:55:49.404577 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 22:55:49.404728 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 22:55:49.433600 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 22:55:49.452247 sh[963]: Success Sep 12 22:55:49.480230 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 22:55:49.480299 kernel: device-mapper: uevent: version 1.0.3 Sep 12 22:55:49.480314 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 22:55:49.491707 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 12 22:55:49.592974 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 22:55:49.595527 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 22:55:49.611870 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 22:55:49.631707 kernel: BTRFS: device fsid 5d2ab445-1154-4e47-9d7e-ff4b81d84474 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (986) Sep 12 22:55:49.634819 kernel: BTRFS info (device dm-0): first mount of filesystem 5d2ab445-1154-4e47-9d7e-ff4b81d84474 Sep 12 22:55:49.634887 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:55:49.756014 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 22:55:49.756110 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 22:55:49.756131 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 22:55:49.772462 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 22:55:49.773402 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 22:55:49.774316 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 22:55:49.775088 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 22:55:49.776739 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 22:55:49.826718 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1019) Sep 12 22:55:49.830707 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:55:49.833959 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:55:49.850107 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 22:55:49.850184 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 22:55:49.859858 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:55:49.861036 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 12 22:55:49.865887 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 22:55:49.898127 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 22:55:49.900712 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 22:55:49.940055 systemd-networkd[1155]: lo: Link UP Sep 12 22:55:49.940067 systemd-networkd[1155]: lo: Gained carrier Sep 12 22:55:49.941758 systemd-networkd[1155]: Enumeration completed Sep 12 22:55:49.942489 systemd-networkd[1155]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:55:49.942499 systemd-networkd[1155]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 22:55:49.942785 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 22:55:49.944373 systemd[1]: Reached target network.target - Network. Sep 12 22:55:49.945418 systemd-networkd[1155]: eth0: Link UP Sep 12 22:55:49.945424 systemd-networkd[1155]: eth0: Gained carrier Sep 12 22:55:49.945439 systemd-networkd[1155]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:55:49.958783 systemd-networkd[1155]: eth0: DHCPv4 address 172.31.30.120/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 22:55:50.383706 ignition[1115]: Ignition 2.22.0 Sep 12 22:55:50.384544 ignition[1115]: Stage: fetch-offline Sep 12 22:55:50.384812 ignition[1115]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:55:50.384823 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 22:55:50.385140 ignition[1115]: Ignition finished successfully Sep 12 22:55:50.387842 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 22:55:50.389385 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 22:55:50.422499 ignition[1164]: Ignition 2.22.0 Sep 12 22:55:50.422515 ignition[1164]: Stage: fetch Sep 12 22:55:50.422831 ignition[1164]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:55:50.422844 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 22:55:50.422938 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 22:55:50.506988 ignition[1164]: PUT result: OK Sep 12 22:55:50.522505 ignition[1164]: parsed url from cmdline: "" Sep 12 22:55:50.522516 ignition[1164]: no config URL provided Sep 12 22:55:50.522526 ignition[1164]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 22:55:50.522538 ignition[1164]: no config at "/usr/lib/ignition/user.ign" Sep 12 22:55:50.522563 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 22:55:50.524169 ignition[1164]: PUT result: OK Sep 12 22:55:50.524234 ignition[1164]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 12 22:55:50.528392 ignition[1164]: GET result: OK Sep 12 22:55:50.528477 ignition[1164]: parsing config with SHA512: 0c519159c6a29330e640247626e4d821e69d118dffe791f4d7e8dda81c2bce683c2e2e3190fe25f35bbe35667b0036860ff17e6acb6816efd8b1b7eddd5d5e77 Sep 12 22:55:50.534018 unknown[1164]: fetched base config from "system" Sep 12 22:55:50.534056 unknown[1164]: fetched base config from "system" Sep 12 22:55:50.534064 unknown[1164]: fetched user config from "aws" Sep 12 22:55:50.536558 ignition[1164]: fetch: fetch complete Sep 12 22:55:50.536567 ignition[1164]: fetch: fetch passed Sep 12 22:55:50.536642 ignition[1164]: Ignition finished successfully Sep 12 22:55:50.540094 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 22:55:50.541992 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 22:55:50.579387 ignition[1171]: Ignition 2.22.0 Sep 12 22:55:50.579405 ignition[1171]: Stage: kargs Sep 12 22:55:50.579956 ignition[1171]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:55:50.579972 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 22:55:50.580095 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 22:55:50.581149 ignition[1171]: PUT result: OK Sep 12 22:55:50.584071 ignition[1171]: kargs: kargs passed Sep 12 22:55:50.584146 ignition[1171]: Ignition finished successfully Sep 12 22:55:50.586232 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 22:55:50.588117 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 22:55:50.627560 ignition[1177]: Ignition 2.22.0 Sep 12 22:55:50.627575 ignition[1177]: Stage: disks Sep 12 22:55:50.628190 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:55:50.628205 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 22:55:50.628318 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 22:55:50.629167 ignition[1177]: PUT result: OK Sep 12 22:55:50.632296 ignition[1177]: disks: disks passed Sep 12 22:55:50.632365 ignition[1177]: Ignition finished successfully Sep 12 22:55:50.633614 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 22:55:50.634729 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 22:55:50.635246 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 22:55:50.635472 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 22:55:50.635714 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 22:55:50.636088 systemd[1]: Reached target basic.target - Basic System. Sep 12 22:55:50.637530 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 22:55:50.677159 systemd-fsck[1186]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 22:55:50.679979 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 22:55:50.681701 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 22:55:50.835697 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d027afc5-396a-49bf-a5be-60ddd42cb089 r/w with ordered data mode. Quota mode: none. Sep 12 22:55:50.836798 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 22:55:50.837637 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 22:55:50.839219 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 22:55:50.841776 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 22:55:50.842839 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 22:55:50.843217 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 22:55:50.843240 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 22:55:50.851257 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 22:55:50.853568 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 22:55:50.866722 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1205) Sep 12 22:55:50.869720 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:55:50.869779 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:55:50.880770 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 22:55:50.880841 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 22:55:50.882754 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 22:55:51.215447 initrd-setup-root[1229]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 22:55:51.234470 initrd-setup-root[1236]: cut: /sysroot/etc/group: No such file or directory Sep 12 22:55:51.249758 initrd-setup-root[1243]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 22:55:51.269985 initrd-setup-root[1250]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 22:55:51.362824 systemd-networkd[1155]: eth0: Gained IPv6LL Sep 12 22:55:51.547463 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 22:55:51.549959 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 22:55:51.553839 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 22:55:51.567764 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 22:55:51.569940 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:55:51.604560 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 22:55:51.609548 ignition[1318]: INFO : Ignition 2.22.0 Sep 12 22:55:51.609548 ignition[1318]: INFO : Stage: mount Sep 12 22:55:51.611137 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:55:51.611137 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 22:55:51.611137 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 22:55:51.611137 ignition[1318]: INFO : PUT result: OK Sep 12 22:55:51.614958 ignition[1318]: INFO : mount: mount passed Sep 12 22:55:51.614958 ignition[1318]: INFO : Ignition finished successfully Sep 12 22:55:51.617393 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 22:55:51.618970 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 22:55:51.838447 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 22:55:51.879696 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1331) Sep 12 22:55:51.882766 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:55:51.882824 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:55:51.894011 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 22:55:51.894088 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 22:55:51.897033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 22:55:51.929630 ignition[1347]: INFO : Ignition 2.22.0 Sep 12 22:55:51.929630 ignition[1347]: INFO : Stage: files Sep 12 22:55:51.931081 ignition[1347]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:55:51.931081 ignition[1347]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 22:55:51.931081 ignition[1347]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 22:55:51.931081 ignition[1347]: INFO : PUT result: OK Sep 12 22:55:51.933523 ignition[1347]: DEBUG : files: compiled without relabeling support, skipping Sep 12 22:55:51.934837 ignition[1347]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 22:55:51.934837 ignition[1347]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 22:55:51.938281 ignition[1347]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 22:55:51.938925 ignition[1347]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 22:55:51.938925 ignition[1347]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 22:55:51.938796 unknown[1347]: wrote ssh authorized keys file for user: core Sep 12 22:55:51.951010 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 22:55:51.951937 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 22:55:52.034837 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 22:55:52.440747 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 22:55:52.447093 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 22:55:52.447093 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 22:55:52.447093 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 22:55:52.449817 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 22:55:52.449817 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 22:55:52.449817 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 22:55:52.950637 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 22:55:53.989551 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 22:55:53.989551 ignition[1347]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 22:55:53.992165 ignition[1347]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 22:55:53.997089 ignition[1347]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 22:55:53.997089 ignition[1347]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 22:55:53.997089 ignition[1347]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 22:55:54.000476 ignition[1347]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 22:55:54.000476 ignition[1347]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 22:55:54.000476 ignition[1347]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 22:55:54.000476 ignition[1347]: INFO : files: files passed Sep 12 22:55:54.000476 ignition[1347]: INFO : Ignition finished successfully Sep 12 22:55:53.999698 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 22:55:54.002900 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 22:55:54.009859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 22:55:54.019239 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 22:55:54.019390 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 22:55:54.026047 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:55:54.026047 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:55:54.030319 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:55:54.032882 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 22:55:54.033626 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 22:55:54.035724 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 22:55:54.090458 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 22:55:54.090623 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 22:55:54.091973 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 22:55:54.093153 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 22:55:54.094055 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 22:55:54.095244 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 22:55:54.133301 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 22:55:54.135466 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 22:55:54.155080 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 22:55:54.155765 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 22:55:54.156906 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 22:55:54.157784 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 22:55:54.157966 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 22:55:54.159245 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 22:55:54.160319 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 22:55:54.161138 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 22:55:54.161952 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 22:55:54.162751 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 22:55:54.163495 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 22:55:54.164461 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 22:55:54.165252 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 22:55:54.166067 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 22:55:54.167158 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 22:55:54.168107 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 22:55:54.168839 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 22:55:54.169070 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 22:55:54.170095 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 22:55:54.170938 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 22:55:54.171557 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 22:55:54.171707 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 22:55:54.172514 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 22:55:54.172756 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 22:55:54.174054 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 22:55:54.174312 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 22:55:54.175020 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 22:55:54.175216 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 22:55:54.177079 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 22:55:54.181356 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 22:55:54.183161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 22:55:54.183372 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 22:55:54.185537 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 22:55:54.186367 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 22:55:54.193797 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 22:55:54.193936 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 22:55:54.218445 ignition[1401]: INFO : Ignition 2.22.0
Sep 12 22:55:54.218445 ignition[1401]: INFO : Stage: umount
Sep 12 22:55:54.222895 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 22:55:54.222895 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 22:55:54.222895 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 22:55:54.222895 ignition[1401]: INFO : PUT result: OK
Sep 12 22:55:54.222052 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 22:55:54.229134 ignition[1401]: INFO : umount: umount passed
Sep 12 22:55:54.230268 ignition[1401]: INFO : Ignition finished successfully
Sep 12 22:55:54.231070 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 22:55:54.231233 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 22:55:54.232738 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 22:55:54.232855 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 22:55:54.233322 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 22:55:54.233387 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 22:55:54.234040 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 22:55:54.234103 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 22:55:54.234702 systemd[1]: Stopped target network.target - Network.
Sep 12 22:55:54.235298 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 22:55:54.235370 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 22:55:54.236118 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 22:55:54.236845 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 22:55:54.240808 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 22:55:54.241288 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 22:55:54.242334 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 22:55:54.243126 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 22:55:54.243188 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 22:55:54.243875 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 22:55:54.243926 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 22:55:54.244651 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 22:55:54.244749 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 22:55:54.245343 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 22:55:54.245405 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 22:55:54.246168 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 22:55:54.246790 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 22:55:54.250362 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 22:55:54.250506 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 22:55:54.256131 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 22:55:54.256526 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 22:55:54.256665 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 22:55:54.258910 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 22:55:54.260462 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 22:55:54.261118 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 22:55:54.261175 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 22:55:54.262914 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 22:55:54.263431 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 22:55:54.263506 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 22:55:54.264208 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 22:55:54.264272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 22:55:54.266797 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 22:55:54.266875 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 22:55:54.267472 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 22:55:54.267544 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 22:55:54.268599 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 22:55:54.274740 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 22:55:54.274835 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 22:55:54.282580 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 22:55:54.282786 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 22:55:54.284504 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 22:55:54.284570 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 22:55:54.286224 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 22:55:54.286271 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 22:55:54.286993 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 22:55:54.287060 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 22:55:54.288283 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 22:55:54.288351 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 22:55:54.289465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 22:55:54.289531 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 22:55:54.291990 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 22:55:54.292742 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 22:55:54.292822 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 22:55:54.295251 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 22:55:54.295317 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 22:55:54.298868 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 22:55:54.298934 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 22:55:54.300020 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 22:55:54.300088 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 22:55:54.300761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 22:55:54.300821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:55:54.304564 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 22:55:54.304654 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 12 22:55:54.308784 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 22:55:54.308866 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 22:55:54.315799 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 22:55:54.315955 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 22:55:54.324698 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 22:55:54.324951 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 22:55:54.376023 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 22:55:54.376143 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 22:55:54.377303 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 22:55:54.377841 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 22:55:54.377907 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 22:55:54.380933 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 22:55:54.400771 systemd[1]: Switching root.
Sep 12 22:55:54.443239 systemd-journald[207]: Journal stopped
Sep 12 22:55:56.293427 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 12 22:55:56.293530 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 22:55:56.293555 kernel: SELinux: policy capability open_perms=1
Sep 12 22:55:56.293581 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 22:55:56.293601 kernel: SELinux: policy capability always_check_network=0
Sep 12 22:55:56.293621 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 22:55:56.293642 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 22:55:56.293669 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 22:55:56.293709 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 22:55:56.293729 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 22:55:56.293752 kernel: audit: type=1403 audit(1757717754.808:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 22:55:56.293774 systemd[1]: Successfully loaded SELinux policy in 92.099ms.
Sep 12 22:55:56.293803 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.220ms.
Sep 12 22:55:56.293825 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 22:55:56.293848 systemd[1]: Detected virtualization amazon.
Sep 12 22:55:56.293869 systemd[1]: Detected architecture x86-64.
Sep 12 22:55:56.293889 systemd[1]: Detected first boot.
Sep 12 22:55:56.293914 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 22:55:56.293935 zram_generator::config[1445]: No configuration found.
Sep 12 22:55:56.293963 kernel: Guest personality initialized and is inactive
Sep 12 22:55:56.293982 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 22:55:56.294002 kernel: Initialized host personality
Sep 12 22:55:56.294026 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 22:55:56.294046 systemd[1]: Populated /etc with preset unit settings.
Sep 12 22:55:56.294069 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 22:55:56.294094 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 22:55:56.294115 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 22:55:56.294136 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 22:55:56.294158 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 22:55:56.294179 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 22:55:56.294201 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 22:55:56.294222 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 22:55:56.294244 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 22:55:56.294266 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 22:55:56.294291 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 22:55:56.294316 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 22:55:56.294338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 22:55:56.294359 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 22:55:56.294380 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 22:55:56.294401 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 22:55:56.294423 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 22:55:56.294447 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 22:55:56.294468 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 22:55:56.294490 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 22:55:56.294512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 22:55:56.294533 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 22:55:56.294554 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 22:55:56.294575 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 22:55:56.294596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 22:55:56.294617 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 22:55:56.294641 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 22:55:56.294662 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 22:55:56.296844 systemd[1]: Reached target swap.target - Swaps.
Sep 12 22:55:56.296881 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 22:55:56.296904 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 22:55:56.296925 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 22:55:56.296947 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 22:55:56.296969 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 22:55:56.296990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 22:55:56.297011 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 22:55:56.297039 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 22:55:56.297061 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 22:55:56.297082 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 22:55:56.297104 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:55:56.297125 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 22:55:56.297146 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 22:55:56.297168 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 22:55:56.297191 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 22:55:56.297215 systemd[1]: Reached target machines.target - Containers.
Sep 12 22:55:56.297236 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 22:55:56.297257 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:55:56.297279 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 22:55:56.297300 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 22:55:56.297322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:55:56.297344 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 22:55:56.297365 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:55:56.297386 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 22:55:56.297410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:55:56.297432 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 22:55:56.297454 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 22:55:56.297475 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 22:55:56.297495 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 22:55:56.297516 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 22:55:56.297538 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:55:56.297560 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 22:55:56.297584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 22:55:56.297609 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 22:55:56.297632 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 22:55:56.297656 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 22:55:56.297691 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 22:55:56.297717 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 22:55:56.297739 systemd[1]: Stopped verity-setup.service.
Sep 12 22:55:56.297762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:55:56.297783 kernel: loop: module loaded
Sep 12 22:55:56.297805 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 22:55:56.297830 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 22:55:56.297851 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 22:55:56.297875 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 22:55:56.297896 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 22:55:56.297917 kernel: fuse: init (API version 7.41)
Sep 12 22:55:56.297938 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 22:55:56.297960 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 22:55:56.297981 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 22:55:56.298003 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 22:55:56.298027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:55:56.298049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:55:56.298071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:55:56.298095 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:55:56.298117 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 22:55:56.298139 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 22:55:56.298161 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:55:56.298182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:55:56.298204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 22:55:56.298230 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 22:55:56.298251 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 22:55:56.298321 systemd-journald[1531]: Collecting audit messages is disabled.
Sep 12 22:55:56.298365 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 22:55:56.298387 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 22:55:56.298411 systemd-journald[1531]: Journal started
Sep 12 22:55:56.298456 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec24b60d55e8f79e2727edf56dcb746e) is 4.8M, max 38.4M, 33.6M free.
Sep 12 22:55:56.315199 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 22:55:56.315279 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 22:55:56.315304 kernel: ACPI: bus type drm_connector registered
Sep 12 22:55:56.315327 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 22:55:55.885028 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 22:55:55.905304 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 22:55:55.906225 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 22:55:56.322722 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 22:55:56.330696 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 22:55:56.330774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:55:56.339465 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 22:55:56.343696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 22:55:56.354532 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 22:55:56.354637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 22:55:56.361739 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 22:55:56.371804 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 22:55:56.381920 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 22:55:56.389707 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 22:55:56.396766 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 22:55:56.398139 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 22:55:56.398430 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 22:55:56.401284 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 22:55:56.403226 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 22:55:56.404902 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 22:55:56.422710 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 22:55:56.436653 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 22:55:56.446153 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 22:55:56.452858 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 22:55:56.457904 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 22:55:56.471774 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 22:55:56.474427 kernel: loop0: detected capacity change from 0 to 110984
Sep 12 22:55:56.491322 systemd-tmpfiles[1561]: ACLs are not supported, ignoring.
Sep 12 22:55:56.491354 systemd-tmpfiles[1561]: ACLs are not supported, ignoring.
Sep 12 22:55:56.493248 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec24b60d55e8f79e2727edf56dcb746e is 52.800ms for 1030 entries.
Sep 12 22:55:56.493248 systemd-journald[1531]: System Journal (/var/log/journal/ec24b60d55e8f79e2727edf56dcb746e) is 8M, max 195.6M, 187.6M free.
Sep 12 22:55:56.562441 systemd-journald[1531]: Received client request to flush runtime journal.
Sep 12 22:55:56.497902 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 22:55:56.501248 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 22:55:56.507451 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 22:55:56.565191 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 22:55:56.600260 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 22:55:56.607932 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 22:55:56.611947 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:55:56.619713 kernel: loop1: detected capacity change from 0 to 221472 Sep 12 22:55:56.648078 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Sep 12 22:55:56.648109 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Sep 12 22:55:56.653370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:55:56.743709 kernel: loop2: detected capacity change from 0 to 128016 Sep 12 22:55:56.861832 kernel: loop3: detected capacity change from 0 to 72368 Sep 12 22:55:56.908463 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 22:55:56.981722 kernel: loop4: detected capacity change from 0 to 110984 Sep 12 22:55:57.016828 kernel: loop5: detected capacity change from 0 to 221472 Sep 12 22:55:57.048710 kernel: loop6: detected capacity change from 0 to 128016 Sep 12 22:55:57.068716 kernel: loop7: detected capacity change from 0 to 72368 Sep 12 22:55:57.086379 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 22:55:57.086910 (sd-merge)[1605]: Merged extensions into '/usr'. Sep 12 22:55:57.092911 systemd[1]: Reload requested from client PID 1560 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 22:55:57.092931 systemd[1]: Reloading... Sep 12 22:55:57.196361 zram_generator::config[1634]: No configuration found. Sep 12 22:55:57.516236 systemd[1]: Reloading finished in 422 ms. Sep 12 22:55:57.531362 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 22:55:57.532618 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 22:55:57.541845 systemd[1]: Starting ensure-sysext.service... Sep 12 22:55:57.547119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Sep 12 22:55:57.550686 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:55:57.574601 systemd[1]: Reload requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)... Sep 12 22:55:57.574627 systemd[1]: Reloading... Sep 12 22:55:57.586582 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 22:55:57.588165 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 22:55:57.588640 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 22:55:57.589055 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 22:55:57.594529 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 22:55:57.598609 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Sep 12 22:55:57.600398 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Sep 12 22:55:57.622556 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Sep 12 22:55:57.630280 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 22:55:57.630295 systemd-tmpfiles[1684]: Skipping /boot Sep 12 22:55:57.670927 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 22:55:57.671083 systemd-tmpfiles[1684]: Skipping /boot Sep 12 22:55:57.676033 zram_generator::config[1709]: No configuration found. Sep 12 22:55:58.055968 (udev-worker)[1772]: Network interface NamePolicy= disabled on kernel command line. Sep 12 22:55:58.072707 ldconfig[1556]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 22:55:58.166160 systemd[1]: Reloading finished in 590 ms. 
Sep 12 22:55:58.175757 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 22:55:58.180715 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:55:58.182498 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 22:55:58.185911 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:55:58.229701 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 22:55:58.238731 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 22:55:58.248481 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 22:55:58.246112 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 22:55:58.254709 kernel: ACPI: button: Power Button [PWRF] Sep 12 22:55:58.262125 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 12 22:55:58.271226 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 22:55:58.276833 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 22:55:58.284993 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 22:55:58.296120 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 22:55:58.300925 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 22:55:58.308882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:55:58.309203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:55:58.312794 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 12 22:55:58.317982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 22:55:58.320946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 22:55:58.321994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:55:58.322845 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:55:58.327043 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 22:55:58.327625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:55:58.333798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:55:58.334120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:55:58.334366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:55:58.334493 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:55:58.334614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:55:58.345190 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 22:55:58.345581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:55:58.353710 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 22:55:58.355125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 22:55:58.356951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:55:58.357146 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:55:58.357424 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 22:55:58.358146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:55:58.374248 systemd[1]: Finished ensure-sysext.service. Sep 12 22:55:58.384322 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 22:55:58.420072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 22:55:58.421384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 22:55:58.441742 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 22:55:58.449890 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 22:55:58.451518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 22:55:58.451975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 22:55:58.456280 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 22:55:58.457772 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 12 22:55:58.459650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 22:55:58.461822 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 22:55:58.471354 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 22:55:58.471636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 22:55:58.512732 augenrules[1853]: No rules Sep 12 22:55:58.515305 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 22:55:58.515598 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 22:55:58.517348 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 22:55:58.562228 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 22:55:58.563670 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 22:55:58.578396 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 22:55:58.774463 systemd-resolved[1819]: Positive Trust Anchors: Sep 12 22:55:58.774481 systemd-resolved[1819]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 22:55:58.774547 systemd-resolved[1819]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 22:55:58.775625 systemd-networkd[1815]: lo: Link UP Sep 12 22:55:58.775631 systemd-networkd[1815]: lo: Gained carrier Sep 12 22:55:58.779644 systemd-networkd[1815]: Enumeration completed Sep 12 22:55:58.781194 systemd-resolved[1819]: Defaulting to hostname 'linux'. Sep 12 22:55:58.781207 systemd-networkd[1815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:55:58.781213 systemd-networkd[1815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 22:55:58.782121 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 22:55:58.786403 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 22:55:58.791957 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 22:55:58.793232 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 22:55:58.793950 systemd[1]: Reached target network.target - Network.
Sep 12 22:55:58.794841 systemd-networkd[1815]: eth0: Link UP Sep 12 22:55:58.795030 systemd-networkd[1815]: eth0: Gained carrier Sep 12 22:55:58.795073 systemd-networkd[1815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:55:58.795304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:55:58.795899 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 22:55:58.796898 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 22:55:58.797814 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 22:55:58.798753 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 22:55:58.799934 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 22:55:58.801447 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 22:55:58.803084 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 22:55:58.803586 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 22:55:58.803634 systemd[1]: Reached target paths.target - Path Units. Sep 12 22:55:58.804562 systemd[1]: Reached target timers.target - Timer Units. Sep 12 22:55:58.806215 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 22:55:58.806783 systemd-networkd[1815]: eth0: DHCPv4 address 172.31.30.120/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 22:55:58.810593 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 22:55:58.818171 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Sep 12 22:55:58.821069 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 22:55:58.821675 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 22:55:58.833368 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 22:55:58.834521 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 22:55:58.836395 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 22:55:58.883774 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 22:55:58.897888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 22:55:58.903126 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 22:55:58.903664 systemd[1]: Reached target basic.target - Basic System. Sep 12 22:55:58.904223 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 22:55:58.904263 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 22:55:58.905496 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 22:55:58.908142 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 22:55:58.914037 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 22:55:58.918759 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 22:55:58.922021 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 22:55:58.927910 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 22:55:58.928887 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Sep 12 22:55:58.932991 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 22:55:58.939004 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 22:55:58.947874 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 22:55:58.955045 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 22:55:58.962592 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 22:55:58.971021 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 22:55:58.985244 jq[1959]: false Sep 12 22:55:58.982034 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 22:55:58.986952 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 22:55:58.998144 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 22:55:59.013886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:55:59.015779 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 22:55:59.017056 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 22:55:59.020111 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 22:55:59.030537 google_oslogin_nss_cache[1961]: oslogin_cache_refresh[1961]: Refreshing passwd entry cache Sep 12 22:55:59.023367 oslogin_cache_refresh[1961]: Refreshing passwd entry cache Sep 12 22:55:59.035512 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 22:55:59.047847 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 22:55:59.049204 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 12 22:55:59.050773 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 22:55:59.060714 google_oslogin_nss_cache[1961]: oslogin_cache_refresh[1961]: Failure getting users, quitting Sep 12 22:55:59.060714 google_oslogin_nss_cache[1961]: oslogin_cache_refresh[1961]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 22:55:59.060714 google_oslogin_nss_cache[1961]: oslogin_cache_refresh[1961]: Refreshing group entry cache Sep 12 22:55:59.057950 oslogin_cache_refresh[1961]: Failure getting users, quitting Sep 12 22:55:59.057975 oslogin_cache_refresh[1961]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 22:55:59.058039 oslogin_cache_refresh[1961]: Refreshing group entry cache Sep 12 22:55:59.063424 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 22:55:59.067332 google_oslogin_nss_cache[1961]: oslogin_cache_refresh[1961]: Failure getting groups, quitting Sep 12 22:55:59.067332 google_oslogin_nss_cache[1961]: oslogin_cache_refresh[1961]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 22:55:59.066123 oslogin_cache_refresh[1961]: Failure getting groups, quitting Sep 12 22:55:59.066140 oslogin_cache_refresh[1961]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 22:55:59.085574 extend-filesystems[1960]: Found /dev/nvme0n1p6 Sep 12 22:55:59.088993 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 22:55:59.090240 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 22:55:59.091771 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Sep 12 22:55:59.104705 jq[1977]: true Sep 12 22:55:59.158261 ntpd[1963]: ntpd 4.2.8p18@1.4062-o Fri Sep 12 20:09:42 UTC 2025 (1): Starting Sep 12 22:55:59.158348 ntpd[1963]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 22:55:59.158713 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: ntpd 4.2.8p18@1.4062-o Fri Sep 12 20:09:42 UTC 2025 (1): Starting Sep 12 22:55:59.158713 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 22:55:59.158713 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: ---------------------------------------------------- Sep 12 22:55:59.158713 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: ntp-4 is maintained by Network Time Foundation, Sep 12 22:55:59.158713 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 22:55:59.158713 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: corporation. Support and training for ntp-4 are Sep 12 22:55:59.158713 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: available at https://www.nwtime.org/support Sep 12 22:55:59.158713 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: ---------------------------------------------------- Sep 12 22:55:59.158360 ntpd[1963]: ---------------------------------------------------- Sep 12 22:55:59.158369 ntpd[1963]: ntp-4 is maintained by Network Time Foundation, Sep 12 22:55:59.158378 ntpd[1963]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 22:55:59.158389 ntpd[1963]: corporation. Support and training for ntp-4 are Sep 12 22:55:59.158398 ntpd[1963]: available at https://www.nwtime.org/support Sep 12 22:55:59.158409 ntpd[1963]: ---------------------------------------------------- Sep 12 22:55:59.168710 jq[1990]: true Sep 12 22:55:59.169023 extend-filesystems[1960]: Found /dev/nvme0n1p9 Sep 12 22:55:59.192203 kernel: ntpd[1963]: segfault at 24 ip 000056151623baeb sp 00007fffccf25510 error 4 in ntpd[68aeb,5615161d9000+80000] likely on CPU 1 (core 0, socket 0) Sep 12 22:55:59.192277 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: proto: precision = 0.065 usec (-24) Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: basedate set to 2025-08-31 Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: gps base set to 2025-08-31 (week 2382) Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: Listen normally on 3 eth0 172.31.30.120:123 Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: Listen normally on 4 lo [::1]:123 Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: bind(21) AF_INET6 [fe80::402:c7ff:fe7d:c347%2]:123 flags 0x811 failed: Cannot assign requested address Sep 12 22:55:59.192556 ntpd[1963]: 12 Sep 22:55:59 ntpd[1963]: unable to create socket on eth0 (5) for [fe80::402:c7ff:fe7d:c347%2]:123 Sep 12 22:55:59.181822 ntpd[1963]: proto: precision = 0.065 usec (-24)
Sep 12 22:55:59.178817 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 22:55:59.182783 ntpd[1963]: basedate set to 2025-08-31 Sep 12 22:55:59.196217 extend-filesystems[1960]: Checking size of /dev/nvme0n1p9 Sep 12 22:55:59.182803 ntpd[1963]: gps base set to 2025-08-31 (week 2382) Sep 12 22:55:59.197199 (ntainerd)[2001]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 22:55:59.182959 ntpd[1963]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 22:55:59.182992 ntpd[1963]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 22:55:59.183213 ntpd[1963]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 22:55:59.183241 ntpd[1963]: Listen normally on 3 eth0 172.31.30.120:123 Sep 12 22:55:59.183273 ntpd[1963]: Listen normally on 4 lo [::1]:123 Sep 12 22:55:59.183310 ntpd[1963]: bind(21) AF_INET6 [fe80::402:c7ff:fe7d:c347%2]:123 flags 0x811 failed: Cannot assign requested address Sep 12 22:55:59.183332 ntpd[1963]: unable to create socket on eth0 (5) for [fe80::402:c7ff:fe7d:c347%2]:123 Sep 12 22:55:59.230771 tar[1980]: linux-amd64/helm Sep 12 22:55:59.249399 systemd-coredump[2017]: Process 1963 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Sep 12 22:55:59.254863 update_engine[1974]: I20250912 22:55:59.254087 1974 main.cc:92] Flatcar Update Engine starting Sep 12 22:55:59.262129 dbus-daemon[1957]: [system] SELinux support is enabled Sep 12 22:55:59.260286 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Sep 12 22:55:59.267982 systemd[1]: Started systemd-coredump@0-2017-0.service - Process Core Dump (PID 2017/UID 0).
Sep 12 22:55:59.272812 dbus-daemon[1957]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1815 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 22:55:59.276172 update_engine[1974]: I20250912 22:55:59.275961 1974 update_check_scheduler.cc:74] Next update check in 4m35s Sep 12 22:55:59.269357 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 22:55:59.276488 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 22:55:59.276524 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 22:55:59.277211 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 22:55:59.277232 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 22:55:59.281675 extend-filesystems[1960]: Resized partition /dev/nvme0n1p9 Sep 12 22:55:59.305376 dbus-daemon[1957]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 22:55:59.304870 systemd[1]: Started update-engine.service - Update Engine. Sep 12 22:55:59.308147 extend-filesystems[2020]: resize2fs 1.47.3 (8-Jul-2025) Sep 12 22:55:59.312973 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Sep 12 22:55:59.330712 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 22:55:59.349482 coreos-metadata[1956]: Sep 12 22:55:59.349 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 22:55:59.350963 systemd-logind[1971]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 22:55:59.351595 systemd-logind[1971]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 22:55:59.351997 systemd-logind[1971]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 22:55:59.354209 systemd-logind[1971]: New seat seat0. Sep 12 22:55:59.359751 coreos-metadata[1956]: Sep 12 22:55:59.358 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 22:55:59.362852 coreos-metadata[1956]: Sep 12 22:55:59.360 INFO Fetch successful Sep 12 22:55:59.362852 coreos-metadata[1956]: Sep 12 22:55:59.360 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 22:55:59.370650 coreos-metadata[1956]: Sep 12 22:55:59.368 INFO Fetch successful Sep 12 22:55:59.370650 coreos-metadata[1956]: Sep 12 22:55:59.368 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 22:55:59.372152 coreos-metadata[1956]: Sep 12 22:55:59.371 INFO Fetch successful Sep 12 22:55:59.372152 coreos-metadata[1956]: Sep 12 22:55:59.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 22:55:59.390234 coreos-metadata[1956]: Sep 12 22:55:59.389 INFO Fetch successful Sep 12 22:55:59.390234 coreos-metadata[1956]: Sep 12 22:55:59.390 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 22:55:59.393516 coreos-metadata[1956]: Sep 12 22:55:59.393 INFO Fetch failed with 404: resource not found Sep 12 22:55:59.393516 coreos-metadata[1956]: Sep 12 22:55:59.393 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 12 22:55:59.398451 coreos-metadata[1956]: Sep 12 22:55:59.396 INFO Fetch successful Sep 12 22:55:59.398451 coreos-metadata[1956]: Sep 12 22:55:59.396 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 22:55:59.400055 coreos-metadata[1956]: Sep 12 22:55:59.399 INFO Fetch successful Sep 12 22:55:59.400170 coreos-metadata[1956]: Sep 12 22:55:59.400 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 22:55:59.400468 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 22:55:59.401603 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 22:55:59.403106 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 22:55:59.404769 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 22:55:59.406135 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 22:55:59.408478 coreos-metadata[1956]: Sep 12 22:55:59.406 INFO Fetch successful Sep 12 22:55:59.408478 coreos-metadata[1956]: Sep 12 22:55:59.406 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 22:55:59.410453 coreos-metadata[1956]: Sep 12 22:55:59.410 INFO Fetch successful Sep 12 22:55:59.410453 coreos-metadata[1956]: Sep 12 22:55:59.410 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 22:55:59.413716 coreos-metadata[1956]: Sep 12 22:55:59.412 INFO Fetch successful Sep 12 22:55:59.473318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:55:59.508229 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 22:55:59.521699 extend-filesystems[2020]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 22:55:59.521699 extend-filesystems[2020]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 22:55:59.521699 extend-filesystems[2020]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 12 22:55:59.527976 bash[2044]: Updated "/home/core/.ssh/authorized_keys" Sep 12 22:55:59.523093 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 22:55:59.528371 extend-filesystems[1960]: Resized filesystem in /dev/nvme0n1p9 Sep 12 22:55:59.524693 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 22:55:59.529482 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 22:55:59.541245 systemd[1]: Starting sshkeys.service... Sep 12 22:55:59.544250 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 22:55:59.545621 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 22:55:59.601388 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 22:55:59.618935 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 22:55:59.817654 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 22:55:59.821009 dbus-daemon[1957]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 22:55:59.821809 dbus-daemon[1957]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 22:55:59.831998 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 22:55:59.905924 systemd-coredump[2019]: Process 1963 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1963:
#0 0x000056151623baeb n/a (ntpd + 0x68aeb)
#1 0x00005615161e4cdf n/a (ntpd + 0x11cdf)
#2 0x00005615161e5575 n/a (ntpd + 0x12575)
#3 0x00005615161e0d8a n/a (ntpd + 0xdd8a)
#4 0x00005615161e25d3 n/a (ntpd + 0xf5d3)
#5 0x00005615161eafd1 n/a (ntpd + 0x17fd1)
#6 0x00005615161dbc2d n/a (ntpd + 0x8c2d)
#7 0x00007f4a3dd9016c n/a (libc.so.6 + 0x2716c)
#8 0x00007f4a3dd90229 __libc_start_main (libc.so.6 + 0x27229)
#9 0x00005615161dbc55 n/a (ntpd + 0x8c55)
ELF object binary architecture: AMD x86-64
Sep 12 22:55:59.909921 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Sep 12 22:55:59.910121 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Sep 12 22:55:59.932231 systemd[1]: systemd-coredump@0-2017-0.service: Deactivated successfully.
Sep 12 22:56:00.002161 coreos-metadata[2060]: Sep 12 22:56:00.002 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 12 22:56:00.004734 coreos-metadata[2060]: Sep 12 22:56:00.004 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 12 22:56:00.009422 coreos-metadata[2060]: Sep 12 22:56:00.009 INFO Fetch successful
Sep 12 22:56:00.009422 coreos-metadata[2060]: Sep 12 22:56:00.009 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 12 22:56:00.013708 coreos-metadata[2060]: Sep 12 22:56:00.013 INFO Fetch successful
Sep 12 22:56:00.015881 unknown[2060]: wrote ssh authorized keys file for user: core
Sep 12 22:56:00.030644 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Sep 12 22:56:00.039838 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 22:56:00.111155 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 22:56:00.125586 update-ssh-keys[2138]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 22:56:00.129255 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 22:56:00.138662 systemd[1]: Finished sshkeys.service.
Sep 12 22:56:00.181800 ntpd[2133]: ntpd 4.2.8p18@1.4062-o Fri Sep 12 20:09:42 UTC 2025 (1): Starting
Sep 12 22:56:00.182496 ntpd[2133]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 22:56:00.182672 ntpd[2133]: ----------------------------------------------------
Sep 12 22:56:00.182709 ntpd[2133]: ntp-4 is maintained by Network Time Foundation,
Sep 12 22:56:00.182718 ntpd[2133]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 22:56:00.182727 ntpd[2133]: corporation. Support and training for ntp-4 are
Sep 12 22:56:00.182736 ntpd[2133]: available at https://www.nwtime.org/support
Sep 12 22:56:00.182745 ntpd[2133]: ----------------------------------------------------
Sep 12 22:56:00.183446 ntpd[2133]: proto: precision = 0.089 usec (-23)
Sep 12 22:56:00.186113 ntpd[2133]: basedate set to 2025-08-31
Sep 12 22:56:00.194871 kernel: ntpd[2133]: segfault at 24 ip 0000559ae7552aeb sp 00007ffef28233f0 error 4 in ntpd[68aeb,559ae74f0000+80000] likely on CPU 1 (core 0, socket 0)
Sep 12 22:56:00.194919 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Sep 12 22:56:00.186139 ntpd[2133]: gps base set to 2025-08-31 (week 2382)
Sep 12 22:56:00.187031 ntpd[2133]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 22:56:00.187068 ntpd[2133]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 22:56:00.187326 ntpd[2133]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 22:56:00.187354 ntpd[2133]: Listen normally on 3 eth0 172.31.30.120:123
Sep 12 22:56:00.187380 ntpd[2133]: Listen normally on 4 lo [::1]:123
Sep 12 22:56:00.187409 ntpd[2133]: bind(21) AF_INET6 [fe80::402:c7ff:fe7d:c347%2]:123 flags 0x811 failed: Cannot assign requested address
Sep 12 22:56:00.187428 ntpd[2133]: unable to create socket on eth0 (5) for [fe80::402:c7ff:fe7d:c347%2]:123
Sep 12 22:56:00.220391 systemd-coredump[2160]: Process 2133 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Sep 12 22:56:00.230074 systemd[1]: Started systemd-coredump@1-2160-0.service - Process Core Dump (PID 2160/UID 0).
Sep 12 22:56:00.330444 containerd[2001]: time="2025-09-12T22:56:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 22:56:00.342129 sshd_keygen[1998]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 22:56:00.347982 containerd[2001]: time="2025-09-12T22:56:00.343657374Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 22:56:00.405475 polkitd[2087]: Started polkitd version 126
Sep 12 22:56:00.417167 containerd[2001]: time="2025-09-12T22:56:00.417017689Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.179µs"
Sep 12 22:56:00.417167 containerd[2001]: time="2025-09-12T22:56:00.417061537Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 22:56:00.417167 containerd[2001]: time="2025-09-12T22:56:00.417088105Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 22:56:00.417362 containerd[2001]: time="2025-09-12T22:56:00.417297477Z" level=info msg="loading plugin"
id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 22:56:00.417362 containerd[2001]: time="2025-09-12T22:56:00.417324645Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 22:56:00.417362 containerd[2001]: time="2025-09-12T22:56:00.417358257Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 22:56:00.417473 containerd[2001]: time="2025-09-12T22:56:00.417427696Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 22:56:00.417473 containerd[2001]: time="2025-09-12T22:56:00.417443328Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 22:56:00.420907 containerd[2001]: time="2025-09-12T22:56:00.420856585Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 22:56:00.420907 containerd[2001]: time="2025-09-12T22:56:00.420903351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 22:56:00.421066 containerd[2001]: time="2025-09-12T22:56:00.420931450Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 22:56:00.421066 containerd[2001]: time="2025-09-12T22:56:00.420943385Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 22:56:00.421156 containerd[2001]: time="2025-09-12T22:56:00.421095919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 22:56:00.422205 containerd[2001]: time="2025-09-12T22:56:00.421349195Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 22:56:00.422205 containerd[2001]: time="2025-09-12T22:56:00.421393403Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 22:56:00.422205 containerd[2001]: time="2025-09-12T22:56:00.421410303Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 22:56:00.422205 containerd[2001]: time="2025-09-12T22:56:00.421617259Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 22:56:00.428340 containerd[2001]: time="2025-09-12T22:56:00.428289038Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 22:56:00.428463 containerd[2001]: time="2025-09-12T22:56:00.428440557Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 22:56:00.438074 polkitd[2087]: Loading rules from directory /etc/polkit-1/rules.d
Sep 12 22:56:00.438611 polkitd[2087]: Loading rules from directory /run/polkit-1/rules.d
Sep 12 22:56:00.438671 polkitd[2087]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 12 22:56:00.439096 polkitd[2087]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 12 22:56:00.439144 polkitd[2087]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 12 22:56:00.439192 polkitd[2087]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443709353Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443819682Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443843266Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443860043Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443876665Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443891265Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443909073Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443925130Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443940614Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443954323Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443967431Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.443984072Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.444142927Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 22:56:00.444702 containerd[2001]: time="2025-09-12T22:56:00.444168436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444189057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444213474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444229416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444244998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444260402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444275272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444292306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444310295Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444331458Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 22:56:00.445255
containerd[2001]: time="2025-09-12T22:56:00.444424444Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444445039Z" level=info msg="Start snapshots syncer"
Sep 12 22:56:00.445255 containerd[2001]: time="2025-09-12T22:56:00.444478380Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 22:56:00.445669 containerd[2001]: time="2025-09-12T22:56:00.444806569Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 22:56:00.445669 containerd[2001]: time="2025-09-12T22:56:00.444869806Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.444953557Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445078268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445108896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445125249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445141677Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445159957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445175890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445195070Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 22:56:00.445856 containerd[2001]:
time="2025-09-12T22:56:00.445231713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445253718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445268847Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445299104Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445319382Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 22:56:00.445856 containerd[2001]: time="2025-09-12T22:56:00.445333514Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445348514Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445361294Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445375383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445390203Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445411731Z" level=info msg="runtime interface created"
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445420045Z" level=info msg="created NRI interface"
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445433441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445451473Z" level=info msg="Connect containerd service"
Sep 12 22:56:00.446364 containerd[2001]: time="2025-09-12T22:56:00.445483092Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 22:56:00.451849 containerd[2001]: time="2025-09-12T22:56:00.451532810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 22:56:00.448894 systemd[1]: Started polkit.service - Authorization Manager.
Sep 12 22:56:00.448541 polkitd[2087]: Finished loading, compiling and executing 2 rules
Sep 12 22:56:00.451575 dbus-daemon[1957]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 12 22:56:00.453769 polkitd[2087]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 12 22:56:00.477560 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 22:56:00.488074 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 22:56:00.531326 systemd-coredump[2163]: Process 2133 (ntpd) of user 0 dumped core.
Module libnss_usrfiles.so.2 without build-id.
Module libgcc_s.so.1 without build-id.
Module ld-linux-x86-64.so.2 without build-id.
Module libc.so.6 without build-id.
Module libcrypto.so.3 without build-id.
Module libm.so.6 without build-id.
Module libcap.so.2 without build-id.
Module ntpd without build-id.
Stack trace of thread 2133:
#0 0x0000559ae7552aeb n/a (ntpd + 0x68aeb)
#1 0x0000559ae74fbcdf n/a (ntpd + 0x11cdf)
#2 0x0000559ae74fc575 n/a (ntpd + 0x12575)
#3 0x0000559ae74f7d8a n/a (ntpd + 0xdd8a)
#4 0x0000559ae74f95d3 n/a (ntpd + 0xf5d3)
#5 0x0000559ae7501fd1 n/a (ntpd + 0x17fd1)
#6 0x0000559ae74f2c2d n/a (ntpd + 0x8c2d)
#7 0x00007f99d292916c n/a (libc.so.6 + 0x2716c)
#8 0x00007f99d2929229 __libc_start_main (libc.so.6 + 0x27229)
#9 0x0000559ae74f2c55 n/a (ntpd + 0x8c55)
ELF object binary architecture: AMD x86-64
Sep 12 22:56:00.535280 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Sep 12 22:56:00.535473 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Sep 12 22:56:00.542474 systemd[1]: systemd-coredump@1-2160-0.service: Deactivated successfully.
Sep 12 22:56:00.598804 systemd-networkd[1815]: eth0: Gained IPv6LL
Sep 12 22:56:00.638229 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 22:56:00.638575 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 22:56:00.640994 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 22:56:00.666346 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2.
Sep 12 22:56:00.681237 systemd-hostnamed[2027]: Hostname set to (transient)
Sep 12 22:56:00.683181 systemd-resolved[1819]: System hostname changed to 'ip-172-31-30-120'.
Sep 12 22:56:00.683868 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 22:56:00.697541 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 12 22:56:00.706733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:56:00.715305 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 22:56:00.727441 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 22:56:00.743378 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 22:56:00.786627 ntpd[2223]: ntpd 4.2.8p18@1.4062-o Fri Sep 12 20:09:42 UTC 2025 (1): Starting
Sep 12 22:56:00.786719 ntpd[2223]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 22:56:00.786730 ntpd[2223]: ----------------------------------------------------
Sep 12 22:56:00.786739 ntpd[2223]: ntp-4 is maintained by Network Time Foundation,
Sep 12 22:56:00.786748 ntpd[2223]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 22:56:00.786757 ntpd[2223]: corporation. Support and training for ntp-4 are
Sep 12 22:56:00.786767 ntpd[2223]: available at https://www.nwtime.org/support
Sep 12 22:56:00.786777 ntpd[2223]: ----------------------------------------------------
Sep 12 22:56:00.787479 ntpd[2223]: proto: precision = 0.092 usec (-23)
Sep 12 22:56:00.791963 ntpd[2223]: basedate set to 2025-08-31
Sep 12 22:56:00.791989 ntpd[2223]: gps base set to 2025-08-31 (week 2382)
Sep 12 22:56:00.792555 ntpd[2223]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 22:56:00.792591 ntpd[2223]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 22:56:00.792845 ntpd[2223]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 22:56:00.792875 ntpd[2223]: Listen normally on 3 eth0 172.31.30.120:123
Sep 12 22:56:00.792906 ntpd[2223]: Listen normally on 4 lo [::1]:123
Sep 12 22:56:00.792933 ntpd[2223]: Listen normally on 5 eth0 [fe80::402:c7ff:fe7d:c347%2]:123
Sep 12 22:56:00.792962 ntpd[2223]: Listening on routing socket on fd #22 for interface updates
Sep 12 22:56:00.794672 ntpd[2223]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 22:56:00.807457 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 22:56:00.819192 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 22:56:00.812794 ntpd[2223]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 22:56:00.824447 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 22:56:00.826533 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 22:56:00.863452 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 22:56:00.887498 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 22:56:00.890997 systemd[1]: Started sshd@0-172.31.30.120:22-139.178.89.65:42510.service - OpenSSH per-connection server daemon (139.178.89.65:42510).
Sep 12 22:56:00.954605 tar[1980]: linux-amd64/LICENSE
Sep 12 22:56:00.954605 tar[1980]: linux-amd64/README.md
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986350746Z" level=info msg="Start subscribing containerd event"
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986419671Z" level=info msg="Start recovering state"
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986536449Z" level=info msg="Start event monitor"
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986552035Z" level=info msg="Start cni network conf syncer for default"
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986563815Z" level=info msg="Start streaming server"
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986576597Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986586732Z" level=info msg="runtime interface starting up..."
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986595594Z" level=info msg="starting plugins..."
Sep 12 22:56:00.986963 containerd[2001]: time="2025-09-12T22:56:00.986611193Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 22:56:00.987421 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 22:56:00.989701 containerd[2001]: time="2025-09-12T22:56:00.989646096Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 22:56:00.989946 containerd[2001]: time="2025-09-12T22:56:00.989751534Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 22:56:00.991814 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 22:56:00.993083 containerd[2001]: time="2025-09-12T22:56:00.992984203Z" level=info msg="containerd successfully booted in 0.665429s"
Sep 12 22:56:00.994700 amazon-ssm-agent[2221]: Initializing new seelog logger
Sep 12 22:56:00.995128 amazon-ssm-agent[2221]: New Seelog Logger Creation Complete
Sep 12 22:56:00.995270 amazon-ssm-agent[2221]: 2025/09/12 22:56:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 22:56:00.995662 amazon-ssm-agent[2221]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 22:56:00.998596 amazon-ssm-agent[2221]: 2025/09/12 22:56:00 processing appconfig overrides
Sep 12 22:56:00.999576 amazon-ssm-agent[2221]: 2025/09/12 22:56:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 22:56:00.999576 amazon-ssm-agent[2221]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 22:56:00.999576 amazon-ssm-agent[2221]: 2025/09/12 22:56:00 processing appconfig overrides
Sep 12 22:56:00.999997 amazon-ssm-agent[2221]: 2025/09/12 22:56:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 22:56:01.000055 amazon-ssm-agent[2221]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 22:56:01.000187 amazon-ssm-agent[2221]: 2025/09/12 22:56:01 processing appconfig overrides
Sep 12 22:56:01.000901 amazon-ssm-agent[2221]: 2025-09-12 22:56:00.9990 INFO Proxy environment variables:
Sep 12 22:56:01.007715 amazon-ssm-agent[2221]: 2025/09/12 22:56:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 22:56:01.007715 amazon-ssm-agent[2221]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 22:56:01.007715 amazon-ssm-agent[2221]: 2025/09/12 22:56:01 processing appconfig overrides Sep 12 22:56:01.101303 amazon-ssm-agent[2221]: 2025-09-12 22:56:00.9992 INFO https_proxy: Sep 12 22:56:01.201845 amazon-ssm-agent[2221]: 2025-09-12 22:56:00.9992 INFO http_proxy: Sep 12 22:56:01.269750 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 42510 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:01.271821 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:01.306988 amazon-ssm-agent[2221]: 2025-09-12 22:56:00.9992 INFO no_proxy: Sep 12 22:56:01.346848 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 22:56:01.363017 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 22:56:01.390282 systemd-logind[1971]: New session 1 of user core. Sep 12 22:56:01.408783 amazon-ssm-agent[2221]: 2025-09-12 22:56:00.9993 INFO Checking if agent identity type OnPrem can be assumed Sep 12 22:56:01.427871 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 22:56:01.438034 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 22:56:01.469740 (systemd)[2260]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 22:56:01.476259 systemd-logind[1971]: New session c1 of user core. 
Sep 12 22:56:01.508990 amazon-ssm-agent[2221]: 2025-09-12 22:56:00.9998 INFO Checking if agent identity type EC2 can be assumed Sep 12 22:56:01.607758 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0719 INFO Agent will take identity from EC2 Sep 12 22:56:01.707766 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0732 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 12 22:56:01.740003 amazon-ssm-agent[2221]: 2025/09/12 22:56:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 22:56:01.740003 amazon-ssm-agent[2221]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 22:56:01.740003 amazon-ssm-agent[2221]: 2025/09/12 22:56:01 processing appconfig overrides Sep 12 22:56:01.772113 systemd[2260]: Queued start job for default target default.target. Sep 12 22:56:01.779264 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0733 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 22:56:01.779264 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0733 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0733 INFO [amazon-ssm-agent] Registrar detected. Attempting registration 
Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0733 INFO [Registrar] Starting registrar module Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0744 INFO [EC2Identity] Checking disk for registration info Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0744 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.0744 INFO [EC2Identity] Generating registration keypair Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.6870 INFO [EC2Identity] Checking write access before registering Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.6875 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.7376 INFO [EC2Identity] EC2 registration was successful. Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.7376 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.7378 INFO [CredentialRefresher] credentialRefresher has started Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.7378 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.7788 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 22:56:01.779443 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.7791 INFO [CredentialRefresher] Credentials ready Sep 12 22:56:01.782071 systemd[2260]: Created slice app.slice - User Application Slice. Sep 12 22:56:01.782119 systemd[2260]: Reached target paths.target - Paths. Sep 12 22:56:01.782178 systemd[2260]: Reached target timers.target - Timers. Sep 12 22:56:01.784284 systemd[2260]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Sep 12 22:56:01.803424 systemd[2260]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 22:56:01.803580 systemd[2260]: Reached target sockets.target - Sockets. Sep 12 22:56:01.803650 systemd[2260]: Reached target basic.target - Basic System. Sep 12 22:56:01.803769 systemd[2260]: Reached target default.target - Main User Target. Sep 12 22:56:01.803810 systemd[2260]: Startup finished in 316ms. Sep 12 22:56:01.804727 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 22:56:01.807047 amazon-ssm-agent[2221]: 2025-09-12 22:56:01.7793 INFO [CredentialRefresher] Next credential rotation will be in 29.99999153335 minutes Sep 12 22:56:01.813037 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 22:56:01.999114 systemd[1]: Started sshd@1-172.31.30.120:22-139.178.89.65:42524.service - OpenSSH per-connection server daemon (139.178.89.65:42524). Sep 12 22:56:02.203432 sshd[2272]: Accepted publickey for core from 139.178.89.65 port 42524 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:02.209809 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:02.223755 systemd-logind[1971]: New session 2 of user core. Sep 12 22:56:02.229917 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 22:56:02.365514 sshd[2275]: Connection closed by 139.178.89.65 port 42524 Sep 12 22:56:02.366312 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Sep 12 22:56:02.372541 systemd[1]: sshd@1-172.31.30.120:22-139.178.89.65:42524.service: Deactivated successfully. Sep 12 22:56:02.374877 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 22:56:02.378419 systemd-logind[1971]: Session 2 logged out. Waiting for processes to exit. Sep 12 22:56:02.381562 systemd-logind[1971]: Removed session 2. 
Sep 12 22:56:02.406418 systemd[1]: Started sshd@2-172.31.30.120:22-139.178.89.65:42538.service - OpenSSH per-connection server daemon (139.178.89.65:42538). Sep 12 22:56:02.608942 sshd[2281]: Accepted publickey for core from 139.178.89.65 port 42538 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:02.611084 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:02.618007 systemd-logind[1971]: New session 3 of user core. Sep 12 22:56:02.632133 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 22:56:02.757915 sshd[2284]: Connection closed by 139.178.89.65 port 42538 Sep 12 22:56:02.759323 sshd-session[2281]: pam_unix(sshd:session): session closed for user core Sep 12 22:56:02.764489 systemd[1]: sshd@2-172.31.30.120:22-139.178.89.65:42538.service: Deactivated successfully. Sep 12 22:56:02.767303 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 22:56:02.769160 systemd-logind[1971]: Session 3 logged out. Waiting for processes to exit. Sep 12 22:56:02.770889 systemd-logind[1971]: Removed session 3. Sep 12 22:56:02.799077 amazon-ssm-agent[2221]: 2025-09-12 22:56:02.7989 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 22:56:02.900099 amazon-ssm-agent[2221]: 2025-09-12 22:56:02.8013 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2291) started Sep 12 22:56:03.000192 amazon-ssm-agent[2221]: 2025-09-12 22:56:02.8013 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 22:56:03.727488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:56:03.729615 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 22:56:03.731446 systemd[1]: Startup finished in 2.753s (kernel) + 8.088s (initrd) + 9.012s (userspace) = 19.854s. 
Sep 12 22:56:03.741041 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:56:04.976590 kubelet[2308]: E0912 22:56:04.976501 2308 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:56:04.979174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:56:04.979331 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:56:04.979636 systemd[1]: kubelet.service: Consumed 1.091s CPU time, 264.7M memory peak. Sep 12 22:56:08.896093 systemd-resolved[1819]: Clock change detected. Flushing caches. Sep 12 22:56:13.910708 systemd[1]: Started sshd@3-172.31.30.120:22-139.178.89.65:49030.service - OpenSSH per-connection server daemon (139.178.89.65:49030). Sep 12 22:56:14.083154 sshd[2320]: Accepted publickey for core from 139.178.89.65 port 49030 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:14.084729 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:14.090697 systemd-logind[1971]: New session 4 of user core. Sep 12 22:56:14.096695 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 22:56:14.215563 sshd[2323]: Connection closed by 139.178.89.65 port 49030 Sep 12 22:56:14.216612 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Sep 12 22:56:14.221359 systemd[1]: sshd@3-172.31.30.120:22-139.178.89.65:49030.service: Deactivated successfully. Sep 12 22:56:14.223248 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 22:56:14.224213 systemd-logind[1971]: Session 4 logged out. Waiting for processes to exit. 
Sep 12 22:56:14.226094 systemd-logind[1971]: Removed session 4. Sep 12 22:56:14.254203 systemd[1]: Started sshd@4-172.31.30.120:22-139.178.89.65:49042.service - OpenSSH per-connection server daemon (139.178.89.65:49042). Sep 12 22:56:14.415634 sshd[2329]: Accepted publickey for core from 139.178.89.65 port 49042 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:14.416997 sshd-session[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:14.422698 systemd-logind[1971]: New session 5 of user core. Sep 12 22:56:14.432765 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 22:56:14.546766 sshd[2332]: Connection closed by 139.178.89.65 port 49042 Sep 12 22:56:14.547992 sshd-session[2329]: pam_unix(sshd:session): session closed for user core Sep 12 22:56:14.553052 systemd[1]: sshd@4-172.31.30.120:22-139.178.89.65:49042.service: Deactivated successfully. Sep 12 22:56:14.555303 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 22:56:14.556243 systemd-logind[1971]: Session 5 logged out. Waiting for processes to exit. Sep 12 22:56:14.558069 systemd-logind[1971]: Removed session 5. Sep 12 22:56:14.579884 systemd[1]: Started sshd@5-172.31.30.120:22-139.178.89.65:49058.service - OpenSSH per-connection server daemon (139.178.89.65:49058). Sep 12 22:56:14.755556 sshd[2338]: Accepted publickey for core from 139.178.89.65 port 49058 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:14.756986 sshd-session[2338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:14.762093 systemd-logind[1971]: New session 6 of user core. Sep 12 22:56:14.768711 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 22:56:14.886948 sshd[2341]: Connection closed by 139.178.89.65 port 49058 Sep 12 22:56:14.887509 sshd-session[2338]: pam_unix(sshd:session): session closed for user core Sep 12 22:56:14.891271 systemd[1]: sshd@5-172.31.30.120:22-139.178.89.65:49058.service: Deactivated successfully. Sep 12 22:56:14.893158 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 22:56:14.893924 systemd-logind[1971]: Session 6 logged out. Waiting for processes to exit. Sep 12 22:56:14.895073 systemd-logind[1971]: Removed session 6. Sep 12 22:56:14.926187 systemd[1]: Started sshd@6-172.31.30.120:22-139.178.89.65:49064.service - OpenSSH per-connection server daemon (139.178.89.65:49064). Sep 12 22:56:15.109162 sshd[2347]: Accepted publickey for core from 139.178.89.65 port 49064 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:15.110535 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:15.116974 systemd-logind[1971]: New session 7 of user core. Sep 12 22:56:15.122628 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 22:56:15.234073 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 22:56:15.234349 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:56:15.250945 sudo[2351]: pam_unix(sudo:session): session closed for user root Sep 12 22:56:15.273153 sshd[2350]: Connection closed by 139.178.89.65 port 49064 Sep 12 22:56:15.274049 sshd-session[2347]: pam_unix(sshd:session): session closed for user core Sep 12 22:56:15.278305 systemd[1]: sshd@6-172.31.30.120:22-139.178.89.65:49064.service: Deactivated successfully. Sep 12 22:56:15.283996 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 22:56:15.297346 systemd-logind[1971]: Session 7 logged out. Waiting for processes to exit. 
Sep 12 22:56:15.312750 systemd[1]: Started sshd@7-172.31.30.120:22-139.178.89.65:49072.service - OpenSSH per-connection server daemon (139.178.89.65:49072). Sep 12 22:56:15.314277 systemd-logind[1971]: Removed session 7. Sep 12 22:56:15.476340 sshd[2357]: Accepted publickey for core from 139.178.89.65 port 49072 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:15.478074 sshd-session[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:15.484043 systemd-logind[1971]: New session 8 of user core. Sep 12 22:56:15.490615 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 22:56:15.587948 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 22:56:15.588319 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:56:15.593892 sudo[2362]: pam_unix(sudo:session): session closed for user root Sep 12 22:56:15.599671 sudo[2361]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 22:56:15.600040 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:56:15.611043 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 22:56:15.650550 augenrules[2384]: No rules Sep 12 22:56:15.651999 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 22:56:15.652259 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 22:56:15.653578 sudo[2361]: pam_unix(sudo:session): session closed for user root Sep 12 22:56:15.675759 sshd[2360]: Connection closed by 139.178.89.65 port 49072 Sep 12 22:56:15.676283 sshd-session[2357]: pam_unix(sshd:session): session closed for user core Sep 12 22:56:15.683093 systemd[1]: sshd@7-172.31.30.120:22-139.178.89.65:49072.service: Deactivated successfully. 
Sep 12 22:56:15.684821 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 22:56:15.686679 systemd-logind[1971]: Session 8 logged out. Waiting for processes to exit. Sep 12 22:56:15.687831 systemd-logind[1971]: Removed session 8. Sep 12 22:56:15.710042 systemd[1]: Started sshd@8-172.31.30.120:22-139.178.89.65:49088.service - OpenSSH per-connection server daemon (139.178.89.65:49088). Sep 12 22:56:15.895288 sshd[2393]: Accepted publickey for core from 139.178.89.65 port 49088 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:56:15.896971 sshd-session[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:56:15.903033 systemd-logind[1971]: New session 9 of user core. Sep 12 22:56:15.910619 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 22:56:16.010262 sudo[2397]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 22:56:16.010662 sudo[2397]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:56:16.132057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 22:56:16.134786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:56:16.467589 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 22:56:16.470499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:56:16.475871 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:56:16.476239 (dockerd)[2423]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 22:56:16.534417 kubelet[2424]: E0912 22:56:16.528800 2424 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:56:16.532765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:56:16.532902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:56:16.533214 systemd[1]: kubelet.service: Consumed 184ms CPU time, 110.7M memory peak. Sep 12 22:56:16.797622 dockerd[2423]: time="2025-09-12T22:56:16.797490821Z" level=info msg="Starting up" Sep 12 22:56:16.798534 dockerd[2423]: time="2025-09-12T22:56:16.798298063Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 22:56:16.811324 dockerd[2423]: time="2025-09-12T22:56:16.811260964Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 22:56:16.864039 dockerd[2423]: time="2025-09-12T22:56:16.863992976Z" level=info msg="Loading containers: start." Sep 12 22:56:16.875395 kernel: Initializing XFRM netlink socket Sep 12 22:56:17.091700 (udev-worker)[2454]: Network interface NamePolicy= disabled on kernel command line. Sep 12 22:56:17.137395 systemd-networkd[1815]: docker0: Link UP Sep 12 22:56:17.142580 dockerd[2423]: time="2025-09-12T22:56:17.142519639Z" level=info msg="Loading containers: done." 
Sep 12 22:56:17.161034 dockerd[2423]: time="2025-09-12T22:56:17.160977675Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 22:56:17.161219 dockerd[2423]: time="2025-09-12T22:56:17.161060509Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 22:56:17.161219 dockerd[2423]: time="2025-09-12T22:56:17.161144114Z" level=info msg="Initializing buildkit" Sep 12 22:56:17.195579 dockerd[2423]: time="2025-09-12T22:56:17.195513082Z" level=info msg="Completed buildkit initialization" Sep 12 22:56:17.205396 dockerd[2423]: time="2025-09-12T22:56:17.205252379Z" level=info msg="Daemon has completed initialization" Sep 12 22:56:17.205595 dockerd[2423]: time="2025-09-12T22:56:17.205535757Z" level=info msg="API listen on /run/docker.sock" Sep 12 22:56:17.205807 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 22:56:18.424093 containerd[2001]: time="2025-09-12T22:56:18.424035402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 22:56:19.034080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75781329.mount: Deactivated successfully. 
Sep 12 22:56:21.132865 containerd[2001]: time="2025-09-12T22:56:21.132786249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:21.133945 containerd[2001]: time="2025-09-12T22:56:21.133743717Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 22:56:21.134808 containerd[2001]: time="2025-09-12T22:56:21.134778221Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:21.137422 containerd[2001]: time="2025-09-12T22:56:21.137385117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:21.138526 containerd[2001]: time="2025-09-12T22:56:21.138332265Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.714259011s" Sep 12 22:56:21.138526 containerd[2001]: time="2025-09-12T22:56:21.138365625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 22:56:21.138802 containerd[2001]: time="2025-09-12T22:56:21.138781877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" 
Sep 12 22:56:23.226295 containerd[2001]: time="2025-09-12T22:56:23.226237579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:23.227414 containerd[2001]: time="2025-09-12T22:56:23.227109484Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 22:56:23.228528 containerd[2001]: time="2025-09-12T22:56:23.228343983Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:23.230968 containerd[2001]: time="2025-09-12T22:56:23.230937204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:23.231766 containerd[2001]: time="2025-09-12T22:56:23.231731548Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 2.092919422s" Sep 12 22:56:23.231840 containerd[2001]: time="2025-09-12T22:56:23.231768574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 22:56:23.232291 containerd[2001]: time="2025-09-12T22:56:23.232254958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" 
Sep 12 22:56:25.210078 containerd[2001]: time="2025-09-12T22:56:25.210015950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:25.213540 containerd[2001]: time="2025-09-12T22:56:25.213428279Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 22:56:25.216262 containerd[2001]: time="2025-09-12T22:56:25.215592454Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:25.218321 containerd[2001]: time="2025-09-12T22:56:25.218286272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:25.219110 containerd[2001]: time="2025-09-12T22:56:25.219066128Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.986780996s" Sep 12 22:56:25.219110 containerd[2001]: time="2025-09-12T22:56:25.219110976Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 22:56:25.219604 containerd[2001]: time="2025-09-12T22:56:25.219574002Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 22:56:26.223766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378856283.mount: Deactivated successfully. Sep 12 22:56:26.632468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 22:56:26.636521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:56:26.950820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:56:26.964096 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:56:27.044180 kubelet[2726]: E0912 22:56:27.043598 2726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:56:27.047979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:56:27.048167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:56:27.048917 systemd[1]: kubelet.service: Consumed 209ms CPU time, 110.6M memory peak. Sep 12 22:56:27.241313 containerd[2001]: time="2025-09-12T22:56:27.240966125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:27.242043 containerd[2001]: time="2025-09-12T22:56:27.241999106Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 22:56:27.244721 containerd[2001]: time="2025-09-12T22:56:27.244690559Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:27.247611 containerd[2001]: time="2025-09-12T22:56:27.247579826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:27.247990 containerd[2001]: time="2025-09-12T22:56:27.247957032Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.028352789s" Sep 12 22:56:27.247990 containerd[2001]: time="2025-09-12T22:56:27.247991997Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 22:56:27.248486 containerd[2001]: time="2025-09-12T22:56:27.248457481Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" 
Sep 12 22:56:27.725676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247163910.mount: Deactivated successfully. Sep 12 22:56:28.720816 containerd[2001]: time="2025-09-12T22:56:28.720743501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:28.722049 containerd[2001]: time="2025-09-12T22:56:28.721857303Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 22:56:28.723040 containerd[2001]: time="2025-09-12T22:56:28.723004422Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:28.726404 containerd[2001]: time="2025-09-12T22:56:28.725650017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:28.726674 containerd[2001]: time="2025-09-12T22:56:28.726642605Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.478152505s" Sep 12 22:56:28.726725 containerd[2001]: time="2025-09-12T22:56:28.726680636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 22:56:28.727216 containerd[2001]: time="2025-09-12T22:56:28.727189222Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" 
Sep 12 22:56:29.192064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411584422.mount: Deactivated successfully. Sep 12 22:56:29.199547 containerd[2001]: time="2025-09-12T22:56:29.199477477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:56:29.200451 containerd[2001]: time="2025-09-12T22:56:29.200247824Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 22:56:29.201823 containerd[2001]: time="2025-09-12T22:56:29.201756699Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:56:29.204014 containerd[2001]: time="2025-09-12T22:56:29.203933431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:56:29.205068 containerd[2001]: time="2025-09-12T22:56:29.204879266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 477.658419ms" Sep 12 22:56:29.205068 containerd[2001]: time="2025-09-12T22:56:29.204920723Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 22:56:29.205717 containerd[2001]: time="2025-09-12T22:56:29.205520905Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" 
Sep 12 22:56:29.662957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323352806.mount: Deactivated successfully. Sep 12 22:56:31.765889 containerd[2001]: time="2025-09-12T22:56:31.765813778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:31.767008 containerd[2001]: time="2025-09-12T22:56:31.766849029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 22:56:31.768586 containerd[2001]: time="2025-09-12T22:56:31.768549305Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:31.774624 containerd[2001]: time="2025-09-12T22:56:31.774098882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:31.775758 containerd[2001]: time="2025-09-12T22:56:31.775565998Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.570012966s" Sep 12 22:56:31.775758 containerd[2001]: time="2025-09-12T22:56:31.775602924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 22:56:31.802396 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 22:56:34.378900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:56:34.379642 systemd[1]: kubelet.service: Consumed 209ms CPU time, 110.6M memory peak. Sep 12 22:56:34.382344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:56:34.417406 systemd[1]: Reload requested from client PID 2875 ('systemctl') (unit session-9.scope)... Sep 12 22:56:34.417426 systemd[1]: Reloading... Sep 12 22:56:34.566395 zram_generator::config[2922]: No configuration found. Sep 12 22:56:34.845358 systemd[1]: Reloading finished in 427 ms. Sep 12 22:56:34.903972 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 22:56:34.904086 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 22:56:34.904668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:56:34.904728 systemd[1]: kubelet.service: Consumed 145ms CPU time, 98M memory peak. Sep 12 22:56:34.908099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:56:35.186126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:56:35.193863 (kubelet)[2983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:56:35.258401 kubelet[2983]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:56:35.258401 kubelet[2983]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 22:56:35.258401 kubelet[2983]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:56:35.258401 kubelet[2983]: I0912 22:56:35.257687 2983 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:56:35.674296 kubelet[2983]: I0912 22:56:35.674235 2983 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 22:56:35.674296 kubelet[2983]: I0912 22:56:35.674270 2983 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:56:35.674557 kubelet[2983]: I0912 22:56:35.674526 2983 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 22:56:35.727945 kubelet[2983]: I0912 22:56:35.727819 2983 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:56:35.734214 kubelet[2983]: E0912 22:56:35.734150 2983 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://172.31.30.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:35.740567 kubelet[2983]: I0912 22:56:35.740530 2983 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:56:35.748865 kubelet[2983]: I0912 22:56:35.748829 2983 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 22:56:35.753122 kubelet[2983]: I0912 22:56:35.753061 2983 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 22:56:35.753333 kubelet[2983]: I0912 22:56:35.753302 2983 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:56:35.753546 kubelet[2983]: I0912 22:56:35.753331 2983 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"
imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:56:35.753546 kubelet[2983]: I0912 22:56:35.753535 2983 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:56:35.753546 kubelet[2983]: I0912 22:56:35.753545 2983 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 22:56:35.754705 kubelet[2983]: I0912 22:56:35.754678 2983 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:56:35.758635 kubelet[2983]: I0912 22:56:35.758434 2983 kubelet.go:408] "Attempting to sync node with API server" Sep 12 22:56:35.758635 kubelet[2983]: I0912 22:56:35.758475 2983 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:56:35.758635 kubelet[2983]: I0912 22:56:35.758514 2983 kubelet.go:314] "Adding apiserver pod source" Sep 12 22:56:35.758635 kubelet[2983]: I0912 22:56:35.758534 2983 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:56:35.768938 kubelet[2983]: W0912 22:56:35.768869 2983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-120&limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused Sep 12 22:56:35.768938 kubelet[2983]: E0912 22:56:35.768995 2983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.30.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-120&limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:35.769314 kubelet[2983]: W0912 22:56:35.769213 2983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused Sep 12 22:56:35.769314 kubelet[2983]: E0912 22:56:35.769270 2983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:35.770381 kubelet[2983]: I0912 22:56:35.770320 2983 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:56:35.775077 kubelet[2983]: I0912 22:56:35.774911 2983 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:56:35.775842 kubelet[2983]: W0912 22:56:35.775798 2983 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 12 22:56:35.776564 kubelet[2983]: I0912 22:56:35.776523 2983 server.go:1274] "Started kubelet" Sep 12 22:56:35.778345 kubelet[2983]: I0912 22:56:35.777977 2983 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:56:35.778618 kubelet[2983]: I0912 22:56:35.778585 2983 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:56:35.778707 kubelet[2983]: I0912 22:56:35.778680 2983 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:56:35.784746 kubelet[2983]: I0912 22:56:35.784135 2983 server.go:449] "Adding debug handlers to kubelet server" Sep 12 22:56:35.788570 kubelet[2983]: I0912 22:56:35.788193 2983 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:56:35.788716 kubelet[2983]: I0912 22:56:35.788652 2983 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:56:35.790529 kubelet[2983]: I0912 22:56:35.790484 2983 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 22:56:35.790939 kubelet[2983]: E0912 22:56:35.790914 2983 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-120\" not found" Sep 12 22:56:35.794071 kubelet[2983]: I0912 22:56:35.794050 2983 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 22:56:35.794240 kubelet[2983]: I0912 22:56:35.794230 2983 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:56:35.799762 kubelet[2983]: E0912 22:56:35.799724 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": dial tcp 172.31.30.120:6443: connect: connection refused" interval="200ms" Sep 12 22:56:35.800233 kubelet[2983]: I0912 
22:56:35.800213 2983 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:56:35.800624 kubelet[2983]: I0912 22:56:35.800608 2983 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:56:35.805893 kubelet[2983]: I0912 22:56:35.805825 2983 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:56:35.807948 kubelet[2983]: I0912 22:56:35.807821 2983 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 22:56:35.807948 kubelet[2983]: I0912 22:56:35.807846 2983 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 22:56:35.807948 kubelet[2983]: I0912 22:56:35.807868 2983 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 22:56:35.807948 kubelet[2983]: E0912 22:56:35.807906 2983 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:56:35.809604 kubelet[2983]: E0912 22:56:35.805119 2983 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.120:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-120.1864ab05c0a409a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-120,UID:ip-172-31-30-120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-120,},FirstTimestamp:2025-09-12 22:56:35.776498081 +0000 UTC m=+0.577883000,LastTimestamp:2025-09-12 22:56:35.776498081 +0000 UTC m=+0.577883000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-120,}" Sep 12 22:56:35.809604 kubelet[2983]: W0912 22:56:35.809046 2983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused Sep 12 22:56:35.809604 kubelet[2983]: E0912 22:56:35.809091 2983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:35.815916 kubelet[2983]: I0912 22:56:35.815893 2983 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:56:35.817127 kubelet[2983]: W0912 22:56:35.817081 2983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused Sep 12 22:56:35.817227 kubelet[2983]: E0912 22:56:35.817134 2983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:35.837588 kubelet[2983]: I0912 22:56:35.837288 2983 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 22:56:35.837588 kubelet[2983]: I0912 22:56:35.837309 2983 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 22:56:35.837588 kubelet[2983]: I0912 22:56:35.837352 2983 state_mem.go:36] "Initialized new 
in-memory state store" Sep 12 22:56:35.842656 kubelet[2983]: I0912 22:56:35.842622 2983 policy_none.go:49] "None policy: Start" Sep 12 22:56:35.843569 kubelet[2983]: I0912 22:56:35.843538 2983 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 22:56:35.843569 kubelet[2983]: I0912 22:56:35.843566 2983 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:56:35.857646 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 22:56:35.871674 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 22:56:35.875897 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 22:56:35.886057 kubelet[2983]: I0912 22:56:35.885554 2983 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:56:35.886057 kubelet[2983]: I0912 22:56:35.885740 2983 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:56:35.886057 kubelet[2983]: I0912 22:56:35.885751 2983 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:56:35.886949 kubelet[2983]: I0912 22:56:35.886923 2983 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:56:35.889579 kubelet[2983]: E0912 22:56:35.889413 2983 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-120\" not found" Sep 12 22:56:35.920499 systemd[1]: Created slice kubepods-burstable-pod46d7dc3fe74f24abe56112ca03dc5462.slice - libcontainer container kubepods-burstable-pod46d7dc3fe74f24abe56112ca03dc5462.slice. Sep 12 22:56:35.935455 systemd[1]: Created slice kubepods-burstable-podf4736f221eb478806a0a76323059f870.slice - libcontainer container kubepods-burstable-podf4736f221eb478806a0a76323059f870.slice. 
Sep 12 22:56:35.942667 systemd[1]: Created slice kubepods-burstable-pod8592b9fcc753de54d7bf95d3b58f41a4.slice - libcontainer container kubepods-burstable-pod8592b9fcc753de54d7bf95d3b58f41a4.slice. Sep 12 22:56:35.988010 kubelet[2983]: I0912 22:56:35.987949 2983 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-120" Sep 12 22:56:35.988539 kubelet[2983]: E0912 22:56:35.988505 2983 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.120:6443/api/v1/nodes\": dial tcp 172.31.30.120:6443: connect: connection refused" node="ip-172-31-30-120" Sep 12 22:56:35.995888 kubelet[2983]: I0912 22:56:35.995818 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46d7dc3fe74f24abe56112ca03dc5462-ca-certs\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"46d7dc3fe74f24abe56112ca03dc5462\") " pod="kube-system/kube-apiserver-ip-172-31-30-120" Sep 12 22:56:35.995888 kubelet[2983]: I0912 22:56:35.995865 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46d7dc3fe74f24abe56112ca03dc5462-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"46d7dc3fe74f24abe56112ca03dc5462\") " pod="kube-system/kube-apiserver-ip-172-31-30-120" Sep 12 22:56:35.995888 kubelet[2983]: I0912 22:56:35.995885 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:35.996080 kubelet[2983]: I0912 22:56:35.995903 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:35.996080 kubelet[2983]: I0912 22:56:35.995921 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:35.996080 kubelet[2983]: I0912 22:56:35.995935 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8592b9fcc753de54d7bf95d3b58f41a4-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-120\" (UID: \"8592b9fcc753de54d7bf95d3b58f41a4\") " pod="kube-system/kube-scheduler-ip-172-31-30-120" Sep 12 22:56:35.996080 kubelet[2983]: I0912 22:56:35.995951 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46d7dc3fe74f24abe56112ca03dc5462-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"46d7dc3fe74f24abe56112ca03dc5462\") " pod="kube-system/kube-apiserver-ip-172-31-30-120" Sep 12 22:56:35.996080 kubelet[2983]: I0912 22:56:35.995965 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:35.996216 kubelet[2983]: I0912 22:56:35.995985 2983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:36.002752 kubelet[2983]: E0912 22:56:36.002686 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": dial tcp 172.31.30.120:6443: connect: connection refused" interval="400ms" Sep 12 22:56:36.191326 kubelet[2983]: I0912 22:56:36.191221 2983 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-120" Sep 12 22:56:36.192248 kubelet[2983]: E0912 22:56:36.192211 2983 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.120:6443/api/v1/nodes\": dial tcp 172.31.30.120:6443: connect: connection refused" node="ip-172-31-30-120" Sep 12 22:56:36.235158 containerd[2001]: time="2025-09-12T22:56:36.235110214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-120,Uid:46d7dc3fe74f24abe56112ca03dc5462,Namespace:kube-system,Attempt:0,}" Sep 12 22:56:36.244080 containerd[2001]: time="2025-09-12T22:56:36.244038861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-120,Uid:f4736f221eb478806a0a76323059f870,Namespace:kube-system,Attempt:0,}" Sep 12 22:56:36.246155 containerd[2001]: time="2025-09-12T22:56:36.246109199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-120,Uid:8592b9fcc753de54d7bf95d3b58f41a4,Namespace:kube-system,Attempt:0,}" Sep 12 22:56:36.403946 kubelet[2983]: E0912 22:56:36.403895 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": dial tcp 172.31.30.120:6443: connect: connection refused" interval="800ms" Sep 12 22:56:36.411211 containerd[2001]: time="2025-09-12T22:56:36.411161978Z" level=info msg="connecting to shim 15ea8bd19c6e693dee068e7ab08975576eb2ee2ef3cf6eed00ff451b827b340a" address="unix:///run/containerd/s/ccb67c0aab55444b5ffe5f9c33dc4df4c823ee07655d49b6c1b849b94d167206" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:36.415921 containerd[2001]: time="2025-09-12T22:56:36.415514678Z" level=info msg="connecting to shim b8e9a16f5aee9a8afab05485c924ba8d3a33b1f49c7baf173d4be13f3ef782e2" address="unix:///run/containerd/s/ee11275b3823b08237a4626aeaa50f2b3111084d39ac2e668248813d03bb9f33" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:36.423282 containerd[2001]: time="2025-09-12T22:56:36.421278457Z" level=info msg="connecting to shim 12c262e8b633d9b66e8659e24b9eda96ede998eb43ec7c12a3637d87080d6f5f" address="unix:///run/containerd/s/23de4462a0df37a1754cc193b853f37fd08e2cd548680994b70966f0eb928648" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:36.546889 systemd[1]: Started cri-containerd-12c262e8b633d9b66e8659e24b9eda96ede998eb43ec7c12a3637d87080d6f5f.scope - libcontainer container 12c262e8b633d9b66e8659e24b9eda96ede998eb43ec7c12a3637d87080d6f5f. Sep 12 22:56:36.558506 systemd[1]: Started cri-containerd-15ea8bd19c6e693dee068e7ab08975576eb2ee2ef3cf6eed00ff451b827b340a.scope - libcontainer container 15ea8bd19c6e693dee068e7ab08975576eb2ee2ef3cf6eed00ff451b827b340a. Sep 12 22:56:36.560614 systemd[1]: Started cri-containerd-b8e9a16f5aee9a8afab05485c924ba8d3a33b1f49c7baf173d4be13f3ef782e2.scope - libcontainer container b8e9a16f5aee9a8afab05485c924ba8d3a33b1f49c7baf173d4be13f3ef782e2. 
Sep 12 22:56:36.597511 kubelet[2983]: I0912 22:56:36.596514 2983 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-120" Sep 12 22:56:36.597511 kubelet[2983]: E0912 22:56:36.596880 2983 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.120:6443/api/v1/nodes\": dial tcp 172.31.30.120:6443: connect: connection refused" node="ip-172-31-30-120" Sep 12 22:56:36.677915 containerd[2001]: time="2025-09-12T22:56:36.677866082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-120,Uid:46d7dc3fe74f24abe56112ca03dc5462,Namespace:kube-system,Attempt:0,} returns sandbox id \"12c262e8b633d9b66e8659e24b9eda96ede998eb43ec7c12a3637d87080d6f5f\"" Sep 12 22:56:36.682268 containerd[2001]: time="2025-09-12T22:56:36.682223531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-120,Uid:f4736f221eb478806a0a76323059f870,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8e9a16f5aee9a8afab05485c924ba8d3a33b1f49c7baf173d4be13f3ef782e2\"" Sep 12 22:56:36.688867 containerd[2001]: time="2025-09-12T22:56:36.688827165Z" level=info msg="CreateContainer within sandbox \"12c262e8b633d9b66e8659e24b9eda96ede998eb43ec7c12a3637d87080d6f5f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 22:56:36.696394 containerd[2001]: time="2025-09-12T22:56:36.696046419Z" level=info msg="CreateContainer within sandbox \"b8e9a16f5aee9a8afab05485c924ba8d3a33b1f49c7baf173d4be13f3ef782e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 22:56:36.709901 containerd[2001]: time="2025-09-12T22:56:36.709861536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-120,Uid:8592b9fcc753de54d7bf95d3b58f41a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"15ea8bd19c6e693dee068e7ab08975576eb2ee2ef3cf6eed00ff451b827b340a\"" Sep 12 22:56:36.713653 containerd[2001]: 
time="2025-09-12T22:56:36.713610469Z" level=info msg="Container 05412954ef3cbff8dd19349e94e2eb444ea47cc711a063fab6b9b0e382f72d51: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:56:36.714015 containerd[2001]: time="2025-09-12T22:56:36.713968860Z" level=info msg="CreateContainer within sandbox \"15ea8bd19c6e693dee068e7ab08975576eb2ee2ef3cf6eed00ff451b827b340a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 22:56:36.718404 containerd[2001]: time="2025-09-12T22:56:36.718331715Z" level=info msg="Container 5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:56:36.726251 containerd[2001]: time="2025-09-12T22:56:36.726190773Z" level=info msg="Container 196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:56:36.741201 containerd[2001]: time="2025-09-12T22:56:36.741138422Z" level=info msg="CreateContainer within sandbox \"12c262e8b633d9b66e8659e24b9eda96ede998eb43ec7c12a3637d87080d6f5f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"05412954ef3cbff8dd19349e94e2eb444ea47cc711a063fab6b9b0e382f72d51\"" Sep 12 22:56:36.742143 containerd[2001]: time="2025-09-12T22:56:36.742072281Z" level=info msg="CreateContainer within sandbox \"b8e9a16f5aee9a8afab05485c924ba8d3a33b1f49c7baf173d4be13f3ef782e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b\"" Sep 12 22:56:36.742655 containerd[2001]: time="2025-09-12T22:56:36.742637781Z" level=info msg="StartContainer for \"05412954ef3cbff8dd19349e94e2eb444ea47cc711a063fab6b9b0e382f72d51\"" Sep 12 22:56:36.743025 containerd[2001]: time="2025-09-12T22:56:36.743000503Z" level=info msg="StartContainer for \"5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b\"" Sep 12 22:56:36.743456 containerd[2001]: 
time="2025-09-12T22:56:36.743393712Z" level=info msg="CreateContainer within sandbox \"15ea8bd19c6e693dee068e7ab08975576eb2ee2ef3cf6eed00ff451b827b340a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6\"" Sep 12 22:56:36.744843 containerd[2001]: time="2025-09-12T22:56:36.744811179Z" level=info msg="connecting to shim 05412954ef3cbff8dd19349e94e2eb444ea47cc711a063fab6b9b0e382f72d51" address="unix:///run/containerd/s/23de4462a0df37a1754cc193b853f37fd08e2cd548680994b70966f0eb928648" protocol=ttrpc version=3 Sep 12 22:56:36.745125 containerd[2001]: time="2025-09-12T22:56:36.744872682Z" level=info msg="connecting to shim 5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b" address="unix:///run/containerd/s/ee11275b3823b08237a4626aeaa50f2b3111084d39ac2e668248813d03bb9f33" protocol=ttrpc version=3 Sep 12 22:56:36.746652 containerd[2001]: time="2025-09-12T22:56:36.746538195Z" level=info msg="StartContainer for \"196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6\"" Sep 12 22:56:36.748775 containerd[2001]: time="2025-09-12T22:56:36.748697198Z" level=info msg="connecting to shim 196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6" address="unix:///run/containerd/s/ccb67c0aab55444b5ffe5f9c33dc4df4c823ee07655d49b6c1b849b94d167206" protocol=ttrpc version=3 Sep 12 22:56:36.770007 kubelet[2983]: W0912 22:56:36.769918 2983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused Sep 12 22:56:36.770149 kubelet[2983]: E0912 22:56:36.770022 2983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://172.31.30.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:36.787617 systemd[1]: Started cri-containerd-05412954ef3cbff8dd19349e94e2eb444ea47cc711a063fab6b9b0e382f72d51.scope - libcontainer container 05412954ef3cbff8dd19349e94e2eb444ea47cc711a063fab6b9b0e382f72d51. Sep 12 22:56:36.789043 systemd[1]: Started cri-containerd-196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6.scope - libcontainer container 196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6. Sep 12 22:56:36.790834 systemd[1]: Started cri-containerd-5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b.scope - libcontainer container 5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b. Sep 12 22:56:36.822025 kubelet[2983]: W0912 22:56:36.821804 2983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused Sep 12 22:56:36.822025 kubelet[2983]: E0912 22:56:36.821991 2983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:36.833479 kubelet[2983]: W0912 22:56:36.833137 2983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-120&limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused Sep 12 22:56:36.833479 kubelet[2983]: E0912 22:56:36.833323 2983 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-120&limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:36.905596 containerd[2001]: time="2025-09-12T22:56:36.905386819Z" level=info msg="StartContainer for \"05412954ef3cbff8dd19349e94e2eb444ea47cc711a063fab6b9b0e382f72d51\" returns successfully" Sep 12 22:56:36.910964 containerd[2001]: time="2025-09-12T22:56:36.910559702Z" level=info msg="StartContainer for \"5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b\" returns successfully" Sep 12 22:56:36.948943 containerd[2001]: time="2025-09-12T22:56:36.948874435Z" level=info msg="StartContainer for \"196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6\" returns successfully" Sep 12 22:56:37.205435 kubelet[2983]: E0912 22:56:37.205358 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": dial tcp 172.31.30.120:6443: connect: connection refused" interval="1.6s" Sep 12 22:56:37.342906 kubelet[2983]: W0912 22:56:37.342776 2983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused Sep 12 22:56:37.342906 kubelet[2983]: E0912 22:56:37.342865 2983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 
22:56:37.401051 kubelet[2983]: I0912 22:56:37.400634 2983 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-120" Sep 12 22:56:37.401051 kubelet[2983]: E0912 22:56:37.400981 2983 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.120:6443/api/v1/nodes\": dial tcp 172.31.30.120:6443: connect: connection refused" node="ip-172-31-30-120" Sep 12 22:56:37.935980 kubelet[2983]: E0912 22:56:37.935931 2983 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:56:39.004741 kubelet[2983]: I0912 22:56:39.004081 2983 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-120" Sep 12 22:56:39.721410 kubelet[2983]: E0912 22:56:39.720921 2983 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-120\" not found" node="ip-172-31-30-120" Sep 12 22:56:39.770832 kubelet[2983]: I0912 22:56:39.768752 2983 apiserver.go:52] "Watching apiserver" Sep 12 22:56:39.795636 kubelet[2983]: I0912 22:56:39.794413 2983 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 22:56:39.812196 kubelet[2983]: I0912 22:56:39.810927 2983 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-120" Sep 12 22:56:39.812196 kubelet[2983]: E0912 22:56:39.810966 2983 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-120\": node \"ip-172-31-30-120\" not found" Sep 12 22:56:39.882425 kubelet[2983]: E0912 22:56:39.882390 2983 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-scheduler-ip-172-31-30-120\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-120" Sep 12 22:56:39.882728 kubelet[2983]: E0912 22:56:39.882692 2983 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-120\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-120" Sep 12 22:56:41.662045 systemd[1]: Reload requested from client PID 3251 ('systemctl') (unit session-9.scope)... Sep 12 22:56:41.662065 systemd[1]: Reloading... Sep 12 22:56:41.782516 zram_generator::config[3298]: No configuration found. Sep 12 22:56:42.056163 systemd[1]: Reloading finished in 393 ms. Sep 12 22:56:42.096740 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:56:42.112769 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 22:56:42.113038 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:56:42.113107 systemd[1]: kubelet.service: Consumed 937ms CPU time, 127.5M memory peak. Sep 12 22:56:42.115954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:56:42.419940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:56:42.433910 (kubelet)[3355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:56:42.509201 kubelet[3355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:56:42.509201 kubelet[3355]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 12 22:56:42.509201 kubelet[3355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:56:42.509201 kubelet[3355]: I0912 22:56:42.507616 3355 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:56:42.521136 kubelet[3355]: I0912 22:56:42.521094 3355 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 22:56:42.521136 kubelet[3355]: I0912 22:56:42.521125 3355 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:56:42.521489 kubelet[3355]: I0912 22:56:42.521465 3355 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 22:56:42.522785 kubelet[3355]: I0912 22:56:42.522756 3355 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 22:56:42.532161 kubelet[3355]: I0912 22:56:42.531962 3355 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:56:42.536752 kubelet[3355]: I0912 22:56:42.536730 3355 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:56:42.539969 kubelet[3355]: I0912 22:56:42.539898 3355 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 22:56:42.540436 kubelet[3355]: I0912 22:56:42.540071 3355 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 22:56:42.540562 kubelet[3355]: I0912 22:56:42.540534 3355 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:56:42.541047 kubelet[3355]: I0912 22:56:42.540623 3355 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Sep 12 22:56:42.541258 kubelet[3355]: I0912 22:56:42.541243 3355 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:56:42.542154 kubelet[3355]: I0912 22:56:42.541327 3355 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 22:56:42.542154 kubelet[3355]: I0912 22:56:42.541399 3355 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:56:42.542154 kubelet[3355]: I0912 22:56:42.541604 3355 kubelet.go:408] "Attempting to sync node with API server" Sep 12 22:56:42.542154 kubelet[3355]: I0912 22:56:42.541619 3355 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:56:42.542154 kubelet[3355]: I0912 22:56:42.541668 3355 kubelet.go:314] "Adding apiserver pod source" Sep 12 22:56:42.542154 kubelet[3355]: I0912 22:56:42.541681 3355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:56:42.548541 kubelet[3355]: I0912 22:56:42.548510 3355 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:56:42.549319 kubelet[3355]: I0912 22:56:42.549298 3355 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:56:42.552130 kubelet[3355]: I0912 22:56:42.552109 3355 server.go:1274] "Started kubelet" Sep 12 22:56:42.558599 kubelet[3355]: I0912 22:56:42.558532 3355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:56:42.559122 kubelet[3355]: I0912 22:56:42.559102 3355 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:56:42.560251 kubelet[3355]: I0912 22:56:42.560232 3355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:56:42.575397 kubelet[3355]: I0912 22:56:42.573432 3355 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 
22:56:42.575397 kubelet[3355]: I0912 22:56:42.575256 3355 server.go:449] "Adding debug handlers to kubelet server" Sep 12 22:56:42.578406 kubelet[3355]: I0912 22:56:42.576798 3355 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:56:42.579025 kubelet[3355]: I0912 22:56:42.579001 3355 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 22:56:42.579269 kubelet[3355]: E0912 22:56:42.579246 3355 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-120\" not found" Sep 12 22:56:42.579749 kubelet[3355]: I0912 22:56:42.579728 3355 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 22:56:42.579887 kubelet[3355]: I0912 22:56:42.579872 3355 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:56:42.585205 kubelet[3355]: I0912 22:56:42.585168 3355 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:56:42.585320 kubelet[3355]: I0912 22:56:42.585302 3355 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:56:42.588788 kubelet[3355]: I0912 22:56:42.588756 3355 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:56:42.593136 kubelet[3355]: I0912 22:56:42.592997 3355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:56:42.594610 kubelet[3355]: I0912 22:56:42.594591 3355 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 22:56:42.594972 kubelet[3355]: I0912 22:56:42.594689 3355 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 22:56:42.594972 kubelet[3355]: I0912 22:56:42.594705 3355 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 22:56:42.594972 kubelet[3355]: E0912 22:56:42.594745 3355 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:56:42.595942 kubelet[3355]: E0912 22:56:42.595919 3355 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:56:42.660435 kubelet[3355]: I0912 22:56:42.660246 3355 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 22:56:42.660435 kubelet[3355]: I0912 22:56:42.660261 3355 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 22:56:42.660583 kubelet[3355]: I0912 22:56:42.660477 3355 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:56:42.660666 kubelet[3355]: I0912 22:56:42.660641 3355 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 22:56:42.660702 kubelet[3355]: I0912 22:56:42.660659 3355 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 22:56:42.660702 kubelet[3355]: I0912 22:56:42.660679 3355 policy_none.go:49] "None policy: Start" Sep 12 22:56:42.662267 kubelet[3355]: I0912 22:56:42.661280 3355 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 22:56:42.662267 kubelet[3355]: I0912 22:56:42.661299 3355 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:56:42.662267 kubelet[3355]: I0912 22:56:42.661446 3355 state_mem.go:75] "Updated machine memory state" Sep 12 22:56:42.666322 kubelet[3355]: I0912 22:56:42.666297 3355 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:56:42.667132 
kubelet[3355]: I0912 22:56:42.666471 3355 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:56:42.667132 kubelet[3355]: I0912 22:56:42.666484 3355 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:56:42.667132 kubelet[3355]: I0912 22:56:42.667059 3355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:56:42.771574 kubelet[3355]: I0912 22:56:42.771494 3355 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-120" Sep 12 22:56:42.784562 kubelet[3355]: I0912 22:56:42.784516 3355 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-30-120" Sep 12 22:56:42.784793 kubelet[3355]: I0912 22:56:42.784692 3355 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-120" Sep 12 22:56:42.882385 kubelet[3355]: I0912 22:56:42.882148 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8592b9fcc753de54d7bf95d3b58f41a4-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-120\" (UID: \"8592b9fcc753de54d7bf95d3b58f41a4\") " pod="kube-system/kube-scheduler-ip-172-31-30-120" Sep 12 22:56:42.882385 kubelet[3355]: I0912 22:56:42.882196 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46d7dc3fe74f24abe56112ca03dc5462-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"46d7dc3fe74f24abe56112ca03dc5462\") " pod="kube-system/kube-apiserver-ip-172-31-30-120" Sep 12 22:56:42.882385 kubelet[3355]: I0912 22:56:42.882221 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: 
\"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:42.882385 kubelet[3355]: I0912 22:56:42.882243 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:42.882385 kubelet[3355]: I0912 22:56:42.882261 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:42.882686 kubelet[3355]: I0912 22:56:42.882288 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46d7dc3fe74f24abe56112ca03dc5462-ca-certs\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"46d7dc3fe74f24abe56112ca03dc5462\") " pod="kube-system/kube-apiserver-ip-172-31-30-120" Sep 12 22:56:42.882686 kubelet[3355]: I0912 22:56:42.882301 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46d7dc3fe74f24abe56112ca03dc5462-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"46d7dc3fe74f24abe56112ca03dc5462\") " pod="kube-system/kube-apiserver-ip-172-31-30-120" Sep 12 22:56:42.882686 kubelet[3355]: I0912 22:56:42.882317 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:42.882686 kubelet[3355]: I0912 22:56:42.882333 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4736f221eb478806a0a76323059f870-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"f4736f221eb478806a0a76323059f870\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:43.558864 kubelet[3355]: I0912 22:56:43.558597 3355 apiserver.go:52] "Watching apiserver" Sep 12 22:56:43.580646 kubelet[3355]: I0912 22:56:43.580598 3355 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 22:56:43.654426 kubelet[3355]: E0912 22:56:43.654386 3355 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-120\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-120" Sep 12 22:56:43.787047 kubelet[3355]: I0912 22:56:43.786906 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-120" podStartSLOduration=1.786870942 podStartE2EDuration="1.786870942s" podCreationTimestamp="2025-09-12 22:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:56:43.752462183 +0000 UTC m=+1.310509965" watchObservedRunningTime="2025-09-12 22:56:43.786870942 +0000 UTC m=+1.344918721" Sep 12 22:56:43.810059 kubelet[3355]: I0912 22:56:43.809905 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-120" podStartSLOduration=1.809864005 podStartE2EDuration="1.809864005s" 
podCreationTimestamp="2025-09-12 22:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:56:43.788140105 +0000 UTC m=+1.346187887" watchObservedRunningTime="2025-09-12 22:56:43.809864005 +0000 UTC m=+1.367911784" Sep 12 22:56:43.833929 kubelet[3355]: I0912 22:56:43.833571 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-120" podStartSLOduration=1.833553331 podStartE2EDuration="1.833553331s" podCreationTimestamp="2025-09-12 22:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:56:43.812043011 +0000 UTC m=+1.370090794" watchObservedRunningTime="2025-09-12 22:56:43.833553331 +0000 UTC m=+1.391601114" Sep 12 22:56:45.987117 update_engine[1974]: I20250912 22:56:45.987030 1974 update_attempter.cc:509] Updating boot flags... Sep 12 22:56:48.100092 kubelet[3355]: I0912 22:56:48.100044 3355 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 22:56:48.101068 containerd[2001]: time="2025-09-12T22:56:48.100998672Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 22:56:48.102610 kubelet[3355]: I0912 22:56:48.102594 3355 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 22:56:48.970541 systemd[1]: Created slice kubepods-besteffort-poda4156d2c_c061_41ec_b8d4_f55445054f1d.slice - libcontainer container kubepods-besteffort-poda4156d2c_c061_41ec_b8d4_f55445054f1d.slice. 
Sep 12 22:56:49.031252 kubelet[3355]: I0912 22:56:49.031217 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4156d2c-c061-41ec-b8d4-f55445054f1d-kube-proxy\") pod \"kube-proxy-4qlht\" (UID: \"a4156d2c-c061-41ec-b8d4-f55445054f1d\") " pod="kube-system/kube-proxy-4qlht" Sep 12 22:56:49.031252 kubelet[3355]: I0912 22:56:49.031251 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4156d2c-c061-41ec-b8d4-f55445054f1d-lib-modules\") pod \"kube-proxy-4qlht\" (UID: \"a4156d2c-c061-41ec-b8d4-f55445054f1d\") " pod="kube-system/kube-proxy-4qlht" Sep 12 22:56:49.031429 kubelet[3355]: I0912 22:56:49.031273 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqz2g\" (UniqueName: \"kubernetes.io/projected/a4156d2c-c061-41ec-b8d4-f55445054f1d-kube-api-access-fqz2g\") pod \"kube-proxy-4qlht\" (UID: \"a4156d2c-c061-41ec-b8d4-f55445054f1d\") " pod="kube-system/kube-proxy-4qlht" Sep 12 22:56:49.031429 kubelet[3355]: I0912 22:56:49.031289 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4156d2c-c061-41ec-b8d4-f55445054f1d-xtables-lock\") pod \"kube-proxy-4qlht\" (UID: \"a4156d2c-c061-41ec-b8d4-f55445054f1d\") " pod="kube-system/kube-proxy-4qlht" Sep 12 22:56:49.135524 systemd[1]: Created slice kubepods-besteffort-pod9481e996_e136_4856_9ef3_76f13b0dd132.slice - libcontainer container kubepods-besteffort-pod9481e996_e136_4856_9ef3_76f13b0dd132.slice. 
Sep 12 22:56:49.233439 kubelet[3355]: I0912 22:56:49.233265 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9481e996-e136-4856-9ef3-76f13b0dd132-var-lib-calico\") pod \"tigera-operator-58fc44c59b-k9s7g\" (UID: \"9481e996-e136-4856-9ef3-76f13b0dd132\") " pod="tigera-operator/tigera-operator-58fc44c59b-k9s7g" Sep 12 22:56:49.233439 kubelet[3355]: I0912 22:56:49.233314 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtmq\" (UniqueName: \"kubernetes.io/projected/9481e996-e136-4856-9ef3-76f13b0dd132-kube-api-access-dhtmq\") pod \"tigera-operator-58fc44c59b-k9s7g\" (UID: \"9481e996-e136-4856-9ef3-76f13b0dd132\") " pod="tigera-operator/tigera-operator-58fc44c59b-k9s7g" Sep 12 22:56:49.282690 containerd[2001]: time="2025-09-12T22:56:49.282645019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qlht,Uid:a4156d2c-c061-41ec-b8d4-f55445054f1d,Namespace:kube-system,Attempt:0,}" Sep 12 22:56:49.312973 containerd[2001]: time="2025-09-12T22:56:49.312869089Z" level=info msg="connecting to shim 3383950b865d532a1e69d8a5484d6ae5a820154320071499c5346be1b8f0909b" address="unix:///run/containerd/s/f7d830ac2c4d2c7ca3ac87408490f358212dbe795cc70b9629df557c971c2caf" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:49.339630 systemd[1]: Started cri-containerd-3383950b865d532a1e69d8a5484d6ae5a820154320071499c5346be1b8f0909b.scope - libcontainer container 3383950b865d532a1e69d8a5484d6ae5a820154320071499c5346be1b8f0909b. 
Sep 12 22:56:49.389194 containerd[2001]: time="2025-09-12T22:56:49.389118623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qlht,Uid:a4156d2c-c061-41ec-b8d4-f55445054f1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3383950b865d532a1e69d8a5484d6ae5a820154320071499c5346be1b8f0909b\"" Sep 12 22:56:49.392981 containerd[2001]: time="2025-09-12T22:56:49.392940449Z" level=info msg="CreateContainer within sandbox \"3383950b865d532a1e69d8a5484d6ae5a820154320071499c5346be1b8f0909b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 22:56:49.411936 containerd[2001]: time="2025-09-12T22:56:49.411270726Z" level=info msg="Container 3bcde7cb835404bb38ac2dff7e7b19173fec67d609ba0a131a318afd2393130d: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:56:49.425470 containerd[2001]: time="2025-09-12T22:56:49.425431436Z" level=info msg="CreateContainer within sandbox \"3383950b865d532a1e69d8a5484d6ae5a820154320071499c5346be1b8f0909b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3bcde7cb835404bb38ac2dff7e7b19173fec67d609ba0a131a318afd2393130d\"" Sep 12 22:56:49.427138 containerd[2001]: time="2025-09-12T22:56:49.426074219Z" level=info msg="StartContainer for \"3bcde7cb835404bb38ac2dff7e7b19173fec67d609ba0a131a318afd2393130d\"" Sep 12 22:56:49.427827 containerd[2001]: time="2025-09-12T22:56:49.427798112Z" level=info msg="connecting to shim 3bcde7cb835404bb38ac2dff7e7b19173fec67d609ba0a131a318afd2393130d" address="unix:///run/containerd/s/f7d830ac2c4d2c7ca3ac87408490f358212dbe795cc70b9629df557c971c2caf" protocol=ttrpc version=3 Sep 12 22:56:49.446605 systemd[1]: Started cri-containerd-3bcde7cb835404bb38ac2dff7e7b19173fec67d609ba0a131a318afd2393130d.scope - libcontainer container 3bcde7cb835404bb38ac2dff7e7b19173fec67d609ba0a131a318afd2393130d. 
Sep 12 22:56:49.448160 containerd[2001]: time="2025-09-12T22:56:49.448130333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-k9s7g,Uid:9481e996-e136-4856-9ef3-76f13b0dd132,Namespace:tigera-operator,Attempt:0,}"
Sep 12 22:56:49.491561 containerd[2001]: time="2025-09-12T22:56:49.491438285Z" level=info msg="connecting to shim f291e8cdd89b4c8bfd4759bfafe01bf37c55f53b04b3f25d87c078686b484141" address="unix:///run/containerd/s/32fd309852600d6f87715f32264bb720dfdc537d805c9a6e3044d5320d4c156a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:56:49.524838 containerd[2001]: time="2025-09-12T22:56:49.524726662Z" level=info msg="StartContainer for \"3bcde7cb835404bb38ac2dff7e7b19173fec67d609ba0a131a318afd2393130d\" returns successfully"
Sep 12 22:56:49.535671 systemd[1]: Started cri-containerd-f291e8cdd89b4c8bfd4759bfafe01bf37c55f53b04b3f25d87c078686b484141.scope - libcontainer container f291e8cdd89b4c8bfd4759bfafe01bf37c55f53b04b3f25d87c078686b484141.
Sep 12 22:56:49.600039 containerd[2001]: time="2025-09-12T22:56:49.599883049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-k9s7g,Uid:9481e996-e136-4856-9ef3-76f13b0dd132,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f291e8cdd89b4c8bfd4759bfafe01bf37c55f53b04b3f25d87c078686b484141\""
Sep 12 22:56:49.603851 containerd[2001]: time="2025-09-12T22:56:49.603803515Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 22:56:49.668588 kubelet[3355]: I0912 22:56:49.668518 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4qlht" podStartSLOduration=1.6685005560000001 podStartE2EDuration="1.668500556s" podCreationTimestamp="2025-09-12 22:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:56:49.668406592 +0000 UTC m=+7.226454385" watchObservedRunningTime="2025-09-12 22:56:49.668500556 +0000 UTC m=+7.226548331"
Sep 12 22:56:50.172531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379074189.mount: Deactivated successfully.
Sep 12 22:56:51.425047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665343043.mount: Deactivated successfully.
Sep 12 22:56:53.043014 containerd[2001]: time="2025-09-12T22:56:53.042963505Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:56:53.044089 containerd[2001]: time="2025-09-12T22:56:53.044048141Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 22:56:53.045317 containerd[2001]: time="2025-09-12T22:56:53.045257849Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:56:53.047634 containerd[2001]: time="2025-09-12T22:56:53.047576508Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:56:53.048841 containerd[2001]: time="2025-09-12T22:56:53.048318394Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.444462021s"
Sep 12 22:56:53.048841 containerd[2001]: time="2025-09-12T22:56:53.048358322Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 22:56:53.051761 containerd[2001]: time="2025-09-12T22:56:53.051729025Z" level=info msg="CreateContainer within sandbox \"f291e8cdd89b4c8bfd4759bfafe01bf37c55f53b04b3f25d87c078686b484141\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 22:56:53.062155 containerd[2001]: time="2025-09-12T22:56:53.061878641Z" level=info msg="Container b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:56:53.068100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232185876.mount: Deactivated successfully.
Sep 12 22:56:53.070135 containerd[2001]: time="2025-09-12T22:56:53.069957583Z" level=info msg="CreateContainer within sandbox \"f291e8cdd89b4c8bfd4759bfafe01bf37c55f53b04b3f25d87c078686b484141\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e\""
Sep 12 22:56:53.071492 containerd[2001]: time="2025-09-12T22:56:53.071438712Z" level=info msg="StartContainer for \"b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e\""
Sep 12 22:56:53.073761 containerd[2001]: time="2025-09-12T22:56:53.073714198Z" level=info msg="connecting to shim b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e" address="unix:///run/containerd/s/32fd309852600d6f87715f32264bb720dfdc537d805c9a6e3044d5320d4c156a" protocol=ttrpc version=3
Sep 12 22:56:53.101631 systemd[1]: Started cri-containerd-b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e.scope - libcontainer container b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e.
Sep 12 22:56:53.137702 containerd[2001]: time="2025-09-12T22:56:53.137665171Z" level=info msg="StartContainer for \"b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e\" returns successfully"
Sep 12 22:57:00.115197 sudo[2397]: pam_unix(sudo:session): session closed for user root
Sep 12 22:57:00.140401 sshd[2396]: Connection closed by 139.178.89.65 port 49088
Sep 12 22:57:00.140839 sshd-session[2393]: pam_unix(sshd:session): session closed for user core
Sep 12 22:57:00.149028 systemd[1]: sshd@8-172.31.30.120:22-139.178.89.65:49088.service: Deactivated successfully.
Sep 12 22:57:00.156172 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 22:57:00.158556 systemd[1]: session-9.scope: Consumed 4.931s CPU time, 157.1M memory peak.
Sep 12 22:57:00.161921 systemd-logind[1971]: Session 9 logged out. Waiting for processes to exit.
Sep 12 22:57:00.167038 systemd-logind[1971]: Removed session 9.
Sep 12 22:57:05.132475 kubelet[3355]: I0912 22:57:05.132360 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-k9s7g" podStartSLOduration=12.684571963 podStartE2EDuration="16.132339577s" podCreationTimestamp="2025-09-12 22:56:49 +0000 UTC" firstStartedPulling="2025-09-12 22:56:49.601828726 +0000 UTC m=+7.159876487" lastFinishedPulling="2025-09-12 22:56:53.049596328 +0000 UTC m=+10.607644101" observedRunningTime="2025-09-12 22:56:53.685053821 +0000 UTC m=+11.243101598" watchObservedRunningTime="2025-09-12 22:57:05.132339577 +0000 UTC m=+22.690387360"
Sep 12 22:57:05.165459 systemd[1]: Created slice kubepods-besteffort-pod198c927e_f9ca_45e0_8560_8d21bfdb47b5.slice - libcontainer container kubepods-besteffort-pod198c927e_f9ca_45e0_8560_8d21bfdb47b5.slice.
Sep 12 22:57:05.249214 kubelet[3355]: I0912 22:57:05.249166 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2888n\" (UniqueName: \"kubernetes.io/projected/198c927e-f9ca-45e0-8560-8d21bfdb47b5-kube-api-access-2888n\") pod \"calico-typha-77c5f4454f-fcjh5\" (UID: \"198c927e-f9ca-45e0-8560-8d21bfdb47b5\") " pod="calico-system/calico-typha-77c5f4454f-fcjh5"
Sep 12 22:57:05.249214 kubelet[3355]: I0912 22:57:05.249211 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198c927e-f9ca-45e0-8560-8d21bfdb47b5-tigera-ca-bundle\") pod \"calico-typha-77c5f4454f-fcjh5\" (UID: \"198c927e-f9ca-45e0-8560-8d21bfdb47b5\") " pod="calico-system/calico-typha-77c5f4454f-fcjh5"
Sep 12 22:57:05.249393 kubelet[3355]: I0912 22:57:05.249229 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/198c927e-f9ca-45e0-8560-8d21bfdb47b5-typha-certs\") pod \"calico-typha-77c5f4454f-fcjh5\" (UID: \"198c927e-f9ca-45e0-8560-8d21bfdb47b5\") " pod="calico-system/calico-typha-77c5f4454f-fcjh5"
Sep 12 22:57:05.302047 systemd[1]: Created slice kubepods-besteffort-pode842708c_e7c8_4b47_a824_d85ee4c81c08.slice - libcontainer container kubepods-besteffort-pode842708c_e7c8_4b47_a824_d85ee4c81c08.slice.
Sep 12 22:57:05.349467 kubelet[3355]: I0912 22:57:05.349414 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-cni-net-dir\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349467 kubelet[3355]: I0912 22:57:05.349460 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e842708c-e7c8-4b47-a824-d85ee4c81c08-node-certs\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349641 kubelet[3355]: I0912 22:57:05.349502 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-policysync\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349641 kubelet[3355]: I0912 22:57:05.349517 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e842708c-e7c8-4b47-a824-d85ee4c81c08-tigera-ca-bundle\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349641 kubelet[3355]: I0912 22:57:05.349533 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-var-run-calico\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349641 kubelet[3355]: I0912 22:57:05.349548 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-flexvol-driver-host\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349765 kubelet[3355]: I0912 22:57:05.349565 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lfk\" (UniqueName: \"kubernetes.io/projected/e842708c-e7c8-4b47-a824-d85ee4c81c08-kube-api-access-x8lfk\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349765 kubelet[3355]: I0912 22:57:05.349580 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-xtables-lock\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349765 kubelet[3355]: I0912 22:57:05.349595 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-lib-modules\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349765 kubelet[3355]: I0912 22:57:05.349610 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-var-lib-calico\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349765 kubelet[3355]: I0912 22:57:05.349625 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-cni-log-dir\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.349765 kubelet[3355]: I0912 22:57:05.349660 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e842708c-e7c8-4b47-a824-d85ee4c81c08-cni-bin-dir\") pod \"calico-node-5krq4\" (UID: \"e842708c-e7c8-4b47-a824-d85ee4c81c08\") " pod="calico-system/calico-node-5krq4"
Sep 12 22:57:05.464798 kubelet[3355]: E0912 22:57:05.464619 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.464798 kubelet[3355]: W0912 22:57:05.464650 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.464798 kubelet[3355]: E0912 22:57:05.464680 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.475254 containerd[2001]: time="2025-09-12T22:57:05.474892982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77c5f4454f-fcjh5,Uid:198c927e-f9ca-45e0-8560-8d21bfdb47b5,Namespace:calico-system,Attempt:0,}"
Sep 12 22:57:05.494888 kubelet[3355]: E0912 22:57:05.494318 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.494888 kubelet[3355]: W0912 22:57:05.494343 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.496951 kubelet[3355]: E0912 22:57:05.495777 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.560692 containerd[2001]: time="2025-09-12T22:57:05.560582610Z" level=info msg="connecting to shim fbdda2d2f0b912e52aa550bd2590961aaa5b1d84b437742ca83ea8e34ba2c9cc" address="unix:///run/containerd/s/b70ee55817c3d8f643d506d0b3b4edc4657338c9f0e2d7234e2561e3884fd375" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:57:05.600613 systemd[1]: Started cri-containerd-fbdda2d2f0b912e52aa550bd2590961aaa5b1d84b437742ca83ea8e34ba2c9cc.scope - libcontainer container fbdda2d2f0b912e52aa550bd2590961aaa5b1d84b437742ca83ea8e34ba2c9cc.
Sep 12 22:57:05.611132 containerd[2001]: time="2025-09-12T22:57:05.611091513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5krq4,Uid:e842708c-e7c8-4b47-a824-d85ee4c81c08,Namespace:calico-system,Attempt:0,}"
Sep 12 22:57:05.636287 kubelet[3355]: E0912 22:57:05.636035 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25l5j" podUID="b680e444-12ae-4def-88c1-f80311726a91"
Sep 12 22:57:05.697134 containerd[2001]: time="2025-09-12T22:57:05.695975461Z" level=info msg="connecting to shim 6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8" address="unix:///run/containerd/s/808ec940c1b9f512fb5599c5b49e643f359cd9f2e2da1c25769f54d6ab25be95" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:57:05.734300 kubelet[3355]: E0912 22:57:05.733623 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.734300 kubelet[3355]: W0912 22:57:05.733649 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.734300 kubelet[3355]: E0912 22:57:05.734020 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.735868 kubelet[3355]: E0912 22:57:05.735623 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.735868 kubelet[3355]: W0912 22:57:05.735660 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.735868 kubelet[3355]: E0912 22:57:05.735685 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.736005 kubelet[3355]: E0912 22:57:05.735949 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.736005 kubelet[3355]: W0912 22:57:05.735960 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.736005 kubelet[3355]: E0912 22:57:05.735989 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.737413 kubelet[3355]: E0912 22:57:05.736534 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.737413 kubelet[3355]: W0912 22:57:05.736551 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.737413 kubelet[3355]: E0912 22:57:05.736569 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.737640 kubelet[3355]: E0912 22:57:05.737559 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.737640 kubelet[3355]: W0912 22:57:05.737588 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.737640 kubelet[3355]: E0912 22:57:05.737605 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.738334 kubelet[3355]: E0912 22:57:05.738034 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.738334 kubelet[3355]: W0912 22:57:05.738048 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.738334 kubelet[3355]: E0912 22:57:05.738062 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.739175 kubelet[3355]: E0912 22:57:05.738979 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.739175 kubelet[3355]: W0912 22:57:05.738997 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.739175 kubelet[3355]: E0912 22:57:05.739011 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.740539 kubelet[3355]: E0912 22:57:05.740520 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.740539 kubelet[3355]: W0912 22:57:05.740538 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.740664 kubelet[3355]: E0912 22:57:05.740554 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.742439 kubelet[3355]: E0912 22:57:05.742417 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.742527 kubelet[3355]: W0912 22:57:05.742439 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.742527 kubelet[3355]: E0912 22:57:05.742456 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.742848 kubelet[3355]: E0912 22:57:05.742830 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.742915 kubelet[3355]: W0912 22:57:05.742849 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.742915 kubelet[3355]: E0912 22:57:05.742864 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.744682 kubelet[3355]: E0912 22:57:05.744443 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.744682 kubelet[3355]: W0912 22:57:05.744464 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.744682 kubelet[3355]: E0912 22:57:05.744481 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.745412 kubelet[3355]: E0912 22:57:05.745057 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.745412 kubelet[3355]: W0912 22:57:05.745076 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.745412 kubelet[3355]: E0912 22:57:05.745092 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.746349 kubelet[3355]: E0912 22:57:05.746190 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.746349 kubelet[3355]: W0912 22:57:05.746207 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.746349 kubelet[3355]: E0912 22:57:05.746222 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.747453 kubelet[3355]: E0912 22:57:05.747406 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.747549 kubelet[3355]: W0912 22:57:05.747530 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.747615 kubelet[3355]: E0912 22:57:05.747555 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.748174 kubelet[3355]: E0912 22:57:05.748146 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.748425 kubelet[3355]: W0912 22:57:05.748407 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.748499 kubelet[3355]: E0912 22:57:05.748433 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.749217 kubelet[3355]: E0912 22:57:05.749196 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.749217 kubelet[3355]: W0912 22:57:05.749216 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.749326 kubelet[3355]: E0912 22:57:05.749232 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.750594 systemd[1]: Started cri-containerd-6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8.scope - libcontainer container 6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8.
Sep 12 22:57:05.751612 kubelet[3355]: E0912 22:57:05.751214 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.751612 kubelet[3355]: W0912 22:57:05.751229 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.751612 kubelet[3355]: E0912 22:57:05.751249 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.752156 kubelet[3355]: E0912 22:57:05.752135 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.752156 kubelet[3355]: W0912 22:57:05.752155 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.752277 kubelet[3355]: E0912 22:57:05.752171 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.754474 kubelet[3355]: E0912 22:57:05.754154 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.754566 kubelet[3355]: W0912 22:57:05.754477 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.754566 kubelet[3355]: E0912 22:57:05.754498 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.755515 kubelet[3355]: E0912 22:57:05.755493 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.755777 kubelet[3355]: W0912 22:57:05.755756 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.755856 kubelet[3355]: E0912 22:57:05.755782 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.757266 kubelet[3355]: E0912 22:57:05.757205 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.757266 kubelet[3355]: W0912 22:57:05.757222 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.757266 kubelet[3355]: E0912 22:57:05.757241 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.757634 kubelet[3355]: I0912 22:57:05.757421 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b680e444-12ae-4def-88c1-f80311726a91-registration-dir\") pod \"csi-node-driver-25l5j\" (UID: \"b680e444-12ae-4def-88c1-f80311726a91\") " pod="calico-system/csi-node-driver-25l5j"
Sep 12 22:57:05.757784 kubelet[3355]: E0912 22:57:05.757768 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.758009 kubelet[3355]: W0912 22:57:05.757847 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.758009 kubelet[3355]: E0912 22:57:05.757872 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.758009 kubelet[3355]: I0912 22:57:05.757901 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b680e444-12ae-4def-88c1-f80311726a91-varrun\") pod \"csi-node-driver-25l5j\" (UID: \"b680e444-12ae-4def-88c1-f80311726a91\") " pod="calico-system/csi-node-driver-25l5j"
Sep 12 22:57:05.758245 kubelet[3355]: E0912 22:57:05.758231 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.758309 kubelet[3355]: W0912 22:57:05.758296 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.758467 kubelet[3355]: E0912 22:57:05.758390 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.758467 kubelet[3355]: I0912 22:57:05.758421 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgdl\" (UniqueName: \"kubernetes.io/projected/b680e444-12ae-4def-88c1-f80311726a91-kube-api-access-cxgdl\") pod \"csi-node-driver-25l5j\" (UID: \"b680e444-12ae-4def-88c1-f80311726a91\") " pod="calico-system/csi-node-driver-25l5j"
Sep 12 22:57:05.758763 kubelet[3355]: E0912 22:57:05.758741 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.758825 kubelet[3355]: W0912 22:57:05.758774 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.758825 kubelet[3355]: E0912 22:57:05.758795 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.759068 kubelet[3355]: E0912 22:57:05.759055 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.759123 kubelet[3355]: W0912 22:57:05.759069 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.759123 kubelet[3355]: E0912 22:57:05.759107 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.759429 kubelet[3355]: E0912 22:57:05.759413 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.759429 kubelet[3355]: W0912 22:57:05.759429 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.759656 kubelet[3355]: E0912 22:57:05.759447 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.759656 kubelet[3355]: E0912 22:57:05.759654 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.759739 kubelet[3355]: W0912 22:57:05.759665 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.759739 kubelet[3355]: E0912 22:57:05.759702 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.760434 kubelet[3355]: E0912 22:57:05.760414 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.760434 kubelet[3355]: W0912 22:57:05.760434 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.760534 kubelet[3355]: E0912 22:57:05.760454 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.760534 kubelet[3355]: I0912 22:57:05.760500 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b680e444-12ae-4def-88c1-f80311726a91-kubelet-dir\") pod \"csi-node-driver-25l5j\" (UID: \"b680e444-12ae-4def-88c1-f80311726a91\") " pod="calico-system/csi-node-driver-25l5j"
Sep 12 22:57:05.761010 kubelet[3355]: E0912 22:57:05.760993 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.761010 kubelet[3355]: W0912 22:57:05.761010 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.761248 kubelet[3355]: E0912 22:57:05.761140 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.761248 kubelet[3355]: I0912 22:57:05.761178 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b680e444-12ae-4def-88c1-f80311726a91-socket-dir\") pod \"csi-node-driver-25l5j\" (UID: \"b680e444-12ae-4def-88c1-f80311726a91\") " pod="calico-system/csi-node-driver-25l5j"
Sep 12 22:57:05.761347 kubelet[3355]: E0912 22:57:05.761309 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.761486 kubelet[3355]: W0912 22:57:05.761345 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.761486 kubelet[3355]: E0912 22:57:05.761402 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:57:05.761637 kubelet[3355]: E0912 22:57:05.761624 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:57:05.761705 kubelet[3355]: W0912 22:57:05.761638 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:57:05.761705 kubelet[3355]: E0912 22:57:05.761658 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 22:57:05.761925 kubelet[3355]: E0912 22:57:05.761910 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.761977 kubelet[3355]: W0912 22:57:05.761926 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.761977 kubelet[3355]: E0912 22:57:05.761945 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.762223 kubelet[3355]: E0912 22:57:05.762192 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.762280 kubelet[3355]: W0912 22:57:05.762224 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.762280 kubelet[3355]: E0912 22:57:05.762238 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.762624 kubelet[3355]: E0912 22:57:05.762610 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.762682 kubelet[3355]: W0912 22:57:05.762625 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.762682 kubelet[3355]: E0912 22:57:05.762658 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.762925 kubelet[3355]: E0912 22:57:05.762912 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.762983 kubelet[3355]: W0912 22:57:05.762926 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.762983 kubelet[3355]: E0912 22:57:05.762939 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.864044 kubelet[3355]: E0912 22:57:05.863781 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.864044 kubelet[3355]: W0912 22:57:05.863821 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.864044 kubelet[3355]: E0912 22:57:05.863849 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.864848 kubelet[3355]: E0912 22:57:05.864831 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.865481 kubelet[3355]: W0912 22:57:05.864969 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.865800 kubelet[3355]: E0912 22:57:05.865780 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.867521 kubelet[3355]: E0912 22:57:05.866381 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.867521 kubelet[3355]: W0912 22:57:05.866411 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.867521 kubelet[3355]: E0912 22:57:05.866445 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.867521 kubelet[3355]: E0912 22:57:05.867484 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.867521 kubelet[3355]: W0912 22:57:05.867499 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.867952 kubelet[3355]: E0912 22:57:05.867804 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.868452 kubelet[3355]: E0912 22:57:05.868306 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.868452 kubelet[3355]: W0912 22:57:05.868321 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.868452 kubelet[3355]: E0912 22:57:05.868394 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.869081 kubelet[3355]: E0912 22:57:05.868797 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.869081 kubelet[3355]: W0912 22:57:05.868813 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.870790 kubelet[3355]: E0912 22:57:05.870512 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.870940 kubelet[3355]: E0912 22:57:05.870926 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.871048 kubelet[3355]: W0912 22:57:05.871030 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.871137 kubelet[3355]: E0912 22:57:05.871125 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.871684 kubelet[3355]: E0912 22:57:05.871669 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.871860 kubelet[3355]: W0912 22:57:05.871788 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.871860 kubelet[3355]: E0912 22:57:05.871816 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.872412 kubelet[3355]: E0912 22:57:05.872396 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.872585 kubelet[3355]: W0912 22:57:05.872502 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.872717 kubelet[3355]: E0912 22:57:05.872656 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.874629 kubelet[3355]: E0912 22:57:05.874600 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.874907 kubelet[3355]: W0912 22:57:05.874752 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.874907 kubelet[3355]: E0912 22:57:05.874853 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.875264 kubelet[3355]: E0912 22:57:05.875239 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.875460 kubelet[3355]: W0912 22:57:05.875342 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.875646 kubelet[3355]: E0912 22:57:05.875559 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.875964 kubelet[3355]: E0912 22:57:05.875943 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.876078 kubelet[3355]: W0912 22:57:05.876062 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.876274 kubelet[3355]: E0912 22:57:05.876258 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.877449 kubelet[3355]: E0912 22:57:05.877430 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.877539 kubelet[3355]: W0912 22:57:05.877450 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.879910 kubelet[3355]: E0912 22:57:05.879880 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.880405 kubelet[3355]: E0912 22:57:05.880224 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.880496 kubelet[3355]: W0912 22:57:05.880414 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.880696 kubelet[3355]: E0912 22:57:05.880676 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.881729 kubelet[3355]: E0912 22:57:05.881709 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.881729 kubelet[3355]: W0912 22:57:05.881727 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.883445 kubelet[3355]: E0912 22:57:05.883418 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.883572 kubelet[3355]: E0912 22:57:05.883555 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.883638 kubelet[3355]: W0912 22:57:05.883577 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.883971 kubelet[3355]: E0912 22:57:05.883948 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.885595 kubelet[3355]: E0912 22:57:05.885575 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.885686 kubelet[3355]: W0912 22:57:05.885596 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.885917 kubelet[3355]: E0912 22:57:05.885854 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.886892 kubelet[3355]: E0912 22:57:05.886871 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.886981 kubelet[3355]: W0912 22:57:05.886892 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.888230 kubelet[3355]: E0912 22:57:05.888119 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.889202 kubelet[3355]: E0912 22:57:05.889179 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.889278 kubelet[3355]: W0912 22:57:05.889202 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.890039 kubelet[3355]: E0912 22:57:05.889798 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.890818 kubelet[3355]: E0912 22:57:05.890802 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.890892 kubelet[3355]: W0912 22:57:05.890820 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.891198 kubelet[3355]: E0912 22:57:05.891176 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.891476 kubelet[3355]: E0912 22:57:05.891457 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.891476 kubelet[3355]: W0912 22:57:05.891475 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.891898 containerd[2001]: time="2025-09-12T22:57:05.891858924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77c5f4454f-fcjh5,Uid:198c927e-f9ca-45e0-8560-8d21bfdb47b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbdda2d2f0b912e52aa550bd2590961aaa5b1d84b437742ca83ea8e34ba2c9cc\"" Sep 12 22:57:05.892430 kubelet[3355]: E0912 22:57:05.892411 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.892430 kubelet[3355]: W0912 22:57:05.892430 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.894162 containerd[2001]: time="2025-09-12T22:57:05.894118117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5krq4,Uid:e842708c-e7c8-4b47-a824-d85ee4c81c08,Namespace:calico-system,Attempt:0,} returns sandbox id \"6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8\"" Sep 12 22:57:05.894504 kubelet[3355]: E0912 22:57:05.894486 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.894504 kubelet[3355]: W0912 22:57:05.894504 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable 
file not found in $PATH, output: "" Sep 12 22:57:05.894634 kubelet[3355]: E0912 22:57:05.894523 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.894814 kubelet[3355]: E0912 22:57:05.894796 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.894867 kubelet[3355]: W0912 22:57:05.894815 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.894867 kubelet[3355]: E0912 22:57:05.894831 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.894867 kubelet[3355]: E0912 22:57:05.894862 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.895104 kubelet[3355]: E0912 22:57:05.895078 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.895104 kubelet[3355]: W0912 22:57:05.895094 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.895192 kubelet[3355]: E0912 22:57:05.895109 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:05.895381 kubelet[3355]: E0912 22:57:05.895350 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:05.910951 containerd[2001]: time="2025-09-12T22:57:05.910913270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 22:57:05.925407 kubelet[3355]: E0912 22:57:05.925354 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:05.925626 kubelet[3355]: W0912 22:57:05.925414 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:05.925626 kubelet[3355]: E0912 22:57:05.925439 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:07.230148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642884670.mount: Deactivated successfully. 
Sep 12 22:57:07.599135 kubelet[3355]: E0912 22:57:07.598847 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25l5j" podUID="b680e444-12ae-4def-88c1-f80311726a91" Sep 12 22:57:08.901866 containerd[2001]: time="2025-09-12T22:57:08.901813802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:08.912917 containerd[2001]: time="2025-09-12T22:57:08.912695456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 22:57:08.914143 containerd[2001]: time="2025-09-12T22:57:08.914104791Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:08.917255 containerd[2001]: time="2025-09-12T22:57:08.917192164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:08.917853 containerd[2001]: time="2025-09-12T22:57:08.917830544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.006856258s" Sep 12 22:57:08.917966 containerd[2001]: time="2025-09-12T22:57:08.917952925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 22:57:08.920925 containerd[2001]: time="2025-09-12T22:57:08.920889195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 22:57:08.952859 containerd[2001]: time="2025-09-12T22:57:08.952820507Z" level=info msg="CreateContainer within sandbox \"fbdda2d2f0b912e52aa550bd2590961aaa5b1d84b437742ca83ea8e34ba2c9cc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 22:57:08.984541 containerd[2001]: time="2025-09-12T22:57:08.984496351Z" level=info msg="Container 1c68f3a84700cbc9e6085329dbf2c0296b714eadab41525d074f72790c3b0615: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:08.985888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337403226.mount: Deactivated successfully. Sep 12 22:57:09.007429 containerd[2001]: time="2025-09-12T22:57:09.007362968Z" level=info msg="CreateContainer within sandbox \"fbdda2d2f0b912e52aa550bd2590961aaa5b1d84b437742ca83ea8e34ba2c9cc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c68f3a84700cbc9e6085329dbf2c0296b714eadab41525d074f72790c3b0615\"" Sep 12 22:57:09.008387 containerd[2001]: time="2025-09-12T22:57:09.008159766Z" level=info msg="StartContainer for \"1c68f3a84700cbc9e6085329dbf2c0296b714eadab41525d074f72790c3b0615\"" Sep 12 22:57:09.010565 containerd[2001]: time="2025-09-12T22:57:09.010513216Z" level=info msg="connecting to shim 1c68f3a84700cbc9e6085329dbf2c0296b714eadab41525d074f72790c3b0615" address="unix:///run/containerd/s/b70ee55817c3d8f643d506d0b3b4edc4657338c9f0e2d7234e2561e3884fd375" protocol=ttrpc version=3 Sep 12 22:57:09.034491 systemd[1]: Started cri-containerd-1c68f3a84700cbc9e6085329dbf2c0296b714eadab41525d074f72790c3b0615.scope - libcontainer container 1c68f3a84700cbc9e6085329dbf2c0296b714eadab41525d074f72790c3b0615. 
Sep 12 22:57:09.119513 containerd[2001]: time="2025-09-12T22:57:09.119449742Z" level=info msg="StartContainer for \"1c68f3a84700cbc9e6085329dbf2c0296b714eadab41525d074f72790c3b0615\" returns successfully" Sep 12 22:57:09.595276 kubelet[3355]: E0912 22:57:09.595212 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25l5j" podUID="b680e444-12ae-4def-88c1-f80311726a91" Sep 12 22:57:09.801791 kubelet[3355]: E0912 22:57:09.801681 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.802158 kubelet[3355]: W0912 22:57:09.801966 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.802158 kubelet[3355]: E0912 22:57:09.802000 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.802845 kubelet[3355]: E0912 22:57:09.802796 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.802845 kubelet[3355]: W0912 22:57:09.802813 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.803065 kubelet[3355]: E0912 22:57:09.802936 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.803470 kubelet[3355]: E0912 22:57:09.803416 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.803470 kubelet[3355]: W0912 22:57:09.803431 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.803688 kubelet[3355]: E0912 22:57:09.803447 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.803994 kubelet[3355]: E0912 22:57:09.803964 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.804475 kubelet[3355]: W0912 22:57:09.804092 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.804475 kubelet[3355]: E0912 22:57:09.804110 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.804818 kubelet[3355]: E0912 22:57:09.804754 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.804818 kubelet[3355]: W0912 22:57:09.804769 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.804818 kubelet[3355]: E0912 22:57:09.804783 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.805277 kubelet[3355]: E0912 22:57:09.805205 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.805277 kubelet[3355]: W0912 22:57:09.805219 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.805277 kubelet[3355]: E0912 22:57:09.805233 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.806914 kubelet[3355]: E0912 22:57:09.806476 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.806914 kubelet[3355]: W0912 22:57:09.806492 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.806914 kubelet[3355]: E0912 22:57:09.806508 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.806914 kubelet[3355]: E0912 22:57:09.806736 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.806914 kubelet[3355]: W0912 22:57:09.806747 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.806914 kubelet[3355]: E0912 22:57:09.806759 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.807468 kubelet[3355]: E0912 22:57:09.807179 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.807468 kubelet[3355]: W0912 22:57:09.807191 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.807468 kubelet[3355]: E0912 22:57:09.807204 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.808202 kubelet[3355]: E0912 22:57:09.808033 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.808202 kubelet[3355]: W0912 22:57:09.808129 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.808202 kubelet[3355]: E0912 22:57:09.808145 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.808944 kubelet[3355]: E0912 22:57:09.808782 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.808944 kubelet[3355]: W0912 22:57:09.808797 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.808944 kubelet[3355]: E0912 22:57:09.808811 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.809687 kubelet[3355]: E0912 22:57:09.809513 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.809687 kubelet[3355]: W0912 22:57:09.809527 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.809687 kubelet[3355]: E0912 22:57:09.809541 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.810443 kubelet[3355]: E0912 22:57:09.810428 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.810550 kubelet[3355]: W0912 22:57:09.810527 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.810614 kubelet[3355]: E0912 22:57:09.810551 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.811202 kubelet[3355]: E0912 22:57:09.811183 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.811202 kubelet[3355]: W0912 22:57:09.811198 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.811318 kubelet[3355]: E0912 22:57:09.811213 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.811526 kubelet[3355]: E0912 22:57:09.811506 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.811526 kubelet[3355]: W0912 22:57:09.811522 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.811634 kubelet[3355]: E0912 22:57:09.811537 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.811893 kubelet[3355]: E0912 22:57:09.811876 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.811893 kubelet[3355]: W0912 22:57:09.811890 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.812004 kubelet[3355]: E0912 22:57:09.811905 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.812152 kubelet[3355]: E0912 22:57:09.812134 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.812361 kubelet[3355]: W0912 22:57:09.812339 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.812453 kubelet[3355]: E0912 22:57:09.812387 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.812620 kubelet[3355]: E0912 22:57:09.812602 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.812678 kubelet[3355]: W0912 22:57:09.812653 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.812722 kubelet[3355]: E0912 22:57:09.812679 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.812946 kubelet[3355]: E0912 22:57:09.812925 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.812946 kubelet[3355]: W0912 22:57:09.812943 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.813064 kubelet[3355]: E0912 22:57:09.812973 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.813209 kubelet[3355]: E0912 22:57:09.813193 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.813209 kubelet[3355]: W0912 22:57:09.813206 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.813313 kubelet[3355]: E0912 22:57:09.813233 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.813489 kubelet[3355]: E0912 22:57:09.813473 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.813489 kubelet[3355]: W0912 22:57:09.813486 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.813632 kubelet[3355]: E0912 22:57:09.813514 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.813897 kubelet[3355]: E0912 22:57:09.813869 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.813897 kubelet[3355]: W0912 22:57:09.813890 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.814832 kubelet[3355]: E0912 22:57:09.814017 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.814832 kubelet[3355]: E0912 22:57:09.814402 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.814832 kubelet[3355]: W0912 22:57:09.814414 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.814832 kubelet[3355]: E0912 22:57:09.814429 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.815049 kubelet[3355]: E0912 22:57:09.814952 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.815049 kubelet[3355]: W0912 22:57:09.814965 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.815049 kubelet[3355]: E0912 22:57:09.814980 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.815478 kubelet[3355]: E0912 22:57:09.815461 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.815478 kubelet[3355]: W0912 22:57:09.815478 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.815608 kubelet[3355]: E0912 22:57:09.815594 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.816051 kubelet[3355]: E0912 22:57:09.816033 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.816125 kubelet[3355]: W0912 22:57:09.816068 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.816758 kubelet[3355]: E0912 22:57:09.816162 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.816758 kubelet[3355]: E0912 22:57:09.816493 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.816758 kubelet[3355]: W0912 22:57:09.816504 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.817852 kubelet[3355]: E0912 22:57:09.817468 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.817852 kubelet[3355]: W0912 22:57:09.817505 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.817852 kubelet[3355]: E0912 22:57:09.817522 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.818666 kubelet[3355]: E0912 22:57:09.818124 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.818748 kubelet[3355]: W0912 22:57:09.818669 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.818748 kubelet[3355]: E0912 22:57:09.818686 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.819246 kubelet[3355]: E0912 22:57:09.818910 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.819246 kubelet[3355]: W0912 22:57:09.818923 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.819246 kubelet[3355]: E0912 22:57:09.818965 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.819979 kubelet[3355]: E0912 22:57:09.819956 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.819979 kubelet[3355]: W0912 22:57:09.819974 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.820092 kubelet[3355]: E0912 22:57:09.819991 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.820647 kubelet[3355]: E0912 22:57:09.820507 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.820647 kubelet[3355]: W0912 22:57:09.820644 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.820768 kubelet[3355]: E0912 22:57:09.820659 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.822210 kubelet[3355]: E0912 22:57:09.822173 3355 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:57:09.822538 kubelet[3355]: W0912 22:57:09.822291 3355 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:57:09.822538 kubelet[3355]: E0912 22:57:09.822312 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:57:09.822538 kubelet[3355]: E0912 22:57:09.822346 3355 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:57:09.822538 kubelet[3355]: I0912 22:57:09.819660 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77c5f4454f-fcjh5" podStartSLOduration=1.801034311 podStartE2EDuration="4.819644379s" podCreationTimestamp="2025-09-12 22:57:05 +0000 UTC" firstStartedPulling="2025-09-12 22:57:05.902066663 +0000 UTC m=+23.460114438" lastFinishedPulling="2025-09-12 22:57:08.920676744 +0000 UTC m=+26.478724506" observedRunningTime="2025-09-12 22:57:09.815854193 +0000 UTC m=+27.373902000" watchObservedRunningTime="2025-09-12 22:57:09.819644379 +0000 UTC m=+27.377692166" Sep 12 22:57:10.233227 containerd[2001]: time="2025-09-12T22:57:10.233159599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:10.234446 containerd[2001]: time="2025-09-12T22:57:10.234250445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 22:57:10.236634 containerd[2001]: time="2025-09-12T22:57:10.236562893Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:10.239996 containerd[2001]: time="2025-09-12T22:57:10.239918070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:10.240881 containerd[2001]: time="2025-09-12T22:57:10.240840001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.319914678s" Sep 12 22:57:10.240994 containerd[2001]: time="2025-09-12T22:57:10.240883786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 22:57:10.246923 containerd[2001]: time="2025-09-12T22:57:10.246857582Z" level=info msg="CreateContainer within sandbox \"6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 22:57:10.264034 containerd[2001]: time="2025-09-12T22:57:10.261604616Z" level=info msg="Container 1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:10.279672 containerd[2001]: time="2025-09-12T22:57:10.279544555Z" level=info msg="CreateContainer within sandbox \"6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0\"" Sep 12 22:57:10.280653 containerd[2001]: time="2025-09-12T22:57:10.280568894Z" level=info msg="StartContainer for \"1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0\"" Sep 12 22:57:10.282505 containerd[2001]: time="2025-09-12T22:57:10.282448978Z" level=info msg="connecting to shim 1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0" address="unix:///run/containerd/s/808ec940c1b9f512fb5599c5b49e643f359cd9f2e2da1c25769f54d6ab25be95" protocol=ttrpc version=3 Sep 12 22:57:10.305598 systemd[1]: Started cri-containerd-1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0.scope - libcontainer container 1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0. 
Sep 12 22:57:10.351389 containerd[2001]: time="2025-09-12T22:57:10.351333420Z" level=info msg="StartContainer for \"1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0\" returns successfully" Sep 12 22:57:10.366744 systemd[1]: cri-containerd-1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0.scope: Deactivated successfully. Sep 12 22:57:10.367298 systemd[1]: cri-containerd-1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0.scope: Consumed 33ms CPU time, 6M memory peak, 2M written to disk. Sep 12 22:57:10.393708 containerd[2001]: time="2025-09-12T22:57:10.393467743Z" level=info msg="received exit event container_id:\"1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0\" id:\"1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0\" pid:4305 exited_at:{seconds:1757717830 nanos:370296152}" Sep 12 22:57:10.405405 containerd[2001]: time="2025-09-12T22:57:10.404821274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0\" id:\"1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0\" pid:4305 exited_at:{seconds:1757717830 nanos:370296152}" Sep 12 22:57:10.451478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e44e36b7ae91a91c5cb852eac0c3f77ab7db5505ff956b6993647ff42f1b6f0-rootfs.mount: Deactivated successfully. 
Sep 12 22:57:10.791055 kubelet[3355]: I0912 22:57:10.791018 3355 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:10.793719 containerd[2001]: time="2025-09-12T22:57:10.793687569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 22:57:11.595906 kubelet[3355]: E0912 22:57:11.595842 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25l5j" podUID="b680e444-12ae-4def-88c1-f80311726a91" Sep 12 22:57:13.595744 kubelet[3355]: E0912 22:57:13.595664 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25l5j" podUID="b680e444-12ae-4def-88c1-f80311726a91" Sep 12 22:57:15.081719 containerd[2001]: time="2025-09-12T22:57:15.081635930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:15.083585 containerd[2001]: time="2025-09-12T22:57:15.083268878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 22:57:15.084804 containerd[2001]: time="2025-09-12T22:57:15.084746677Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:15.088263 containerd[2001]: time="2025-09-12T22:57:15.088098535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 
22:57:15.092936 containerd[2001]: time="2025-09-12T22:57:15.092866995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.298974452s" Sep 12 22:57:15.093232 containerd[2001]: time="2025-09-12T22:57:15.093114342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 22:57:15.102406 containerd[2001]: time="2025-09-12T22:57:15.102268074Z" level=info msg="CreateContainer within sandbox \"6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 22:57:15.118883 containerd[2001]: time="2025-09-12T22:57:15.118011247Z" level=info msg="Container e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:15.132046 containerd[2001]: time="2025-09-12T22:57:15.131986450Z" level=info msg="CreateContainer within sandbox \"6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84\"" Sep 12 22:57:15.134064 containerd[2001]: time="2025-09-12T22:57:15.132913529Z" level=info msg="StartContainer for \"e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84\"" Sep 12 22:57:15.136590 containerd[2001]: time="2025-09-12T22:57:15.136542090Z" level=info msg="connecting to shim e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84" address="unix:///run/containerd/s/808ec940c1b9f512fb5599c5b49e643f359cd9f2e2da1c25769f54d6ab25be95" protocol=ttrpc version=3 Sep 12 22:57:15.163761 
systemd[1]: Started cri-containerd-e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84.scope - libcontainer container e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84. Sep 12 22:57:15.224477 containerd[2001]: time="2025-09-12T22:57:15.224432676Z" level=info msg="StartContainer for \"e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84\" returns successfully" Sep 12 22:57:15.596250 kubelet[3355]: E0912 22:57:15.595923 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25l5j" podUID="b680e444-12ae-4def-88c1-f80311726a91" Sep 12 22:57:16.097037 systemd[1]: cri-containerd-e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84.scope: Deactivated successfully. Sep 12 22:57:16.097418 systemd[1]: cri-containerd-e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84.scope: Consumed 592ms CPU time, 163.7M memory peak, 9.2M read from disk, 171.3M written to disk. 
Sep 12 22:57:16.152334 kubelet[3355]: I0912 22:57:16.152132 3355 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 22:57:16.164331 containerd[2001]: time="2025-09-12T22:57:16.164136921Z" level=info msg="received exit event container_id:\"e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84\" id:\"e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84\" pid:4361 exited_at:{seconds:1757717836 nanos:163897252}" Sep 12 22:57:16.164878 containerd[2001]: time="2025-09-12T22:57:16.164601901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84\" id:\"e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84\" pid:4361 exited_at:{seconds:1757717836 nanos:163897252}" Sep 12 22:57:16.226806 systemd[1]: Created slice kubepods-besteffort-pod73452f16_5cdc_4405_a665_642cdc11f813.slice - libcontainer container kubepods-besteffort-pod73452f16_5cdc_4405_a665_642cdc11f813.slice. 
Sep 12 22:57:16.264393 kubelet[3355]: I0912 22:57:16.262512 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cr66\" (UniqueName: \"kubernetes.io/projected/73452f16-5cdc-4405-a665-642cdc11f813-kube-api-access-4cr66\") pod \"calico-kube-controllers-6549ddf4c9-rbbqr\" (UID: \"73452f16-5cdc-4405-a665-642cdc11f813\") " pod="calico-system/calico-kube-controllers-6549ddf4c9-rbbqr" Sep 12 22:57:16.264393 kubelet[3355]: I0912 22:57:16.262563 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dlp\" (UniqueName: \"kubernetes.io/projected/ccbc5552-229a-44db-b0cc-8d9c359fbcb5-kube-api-access-p2dlp\") pod \"coredns-7c65d6cfc9-84v28\" (UID: \"ccbc5552-229a-44db-b0cc-8d9c359fbcb5\") " pod="kube-system/coredns-7c65d6cfc9-84v28" Sep 12 22:57:16.264393 kubelet[3355]: I0912 22:57:16.262595 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8v6p\" (UniqueName: \"kubernetes.io/projected/02020cce-2386-4312-944b-9136ad36d797-kube-api-access-c8v6p\") pod \"calico-apiserver-5b6c6f8c54-wvkpl\" (UID: \"02020cce-2386-4312-944b-9136ad36d797\") " pod="calico-apiserver/calico-apiserver-5b6c6f8c54-wvkpl" Sep 12 22:57:16.264393 kubelet[3355]: I0912 22:57:16.262618 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3e0a937-c0ff-49cd-987c-00e813a4ecaf-config-volume\") pod \"coredns-7c65d6cfc9-9bnxv\" (UID: \"c3e0a937-c0ff-49cd-987c-00e813a4ecaf\") " pod="kube-system/coredns-7c65d6cfc9-9bnxv" Sep 12 22:57:16.264393 kubelet[3355]: I0912 22:57:16.262646 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/02020cce-2386-4312-944b-9136ad36d797-calico-apiserver-certs\") pod 
\"calico-apiserver-5b6c6f8c54-wvkpl\" (UID: \"02020cce-2386-4312-944b-9136ad36d797\") " pod="calico-apiserver/calico-apiserver-5b6c6f8c54-wvkpl" Sep 12 22:57:16.264716 kubelet[3355]: I0912 22:57:16.262668 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wmfh\" (UniqueName: \"kubernetes.io/projected/c3e0a937-c0ff-49cd-987c-00e813a4ecaf-kube-api-access-7wmfh\") pod \"coredns-7c65d6cfc9-9bnxv\" (UID: \"c3e0a937-c0ff-49cd-987c-00e813a4ecaf\") " pod="kube-system/coredns-7c65d6cfc9-9bnxv" Sep 12 22:57:16.264716 kubelet[3355]: I0912 22:57:16.262695 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73452f16-5cdc-4405-a665-642cdc11f813-tigera-ca-bundle\") pod \"calico-kube-controllers-6549ddf4c9-rbbqr\" (UID: \"73452f16-5cdc-4405-a665-642cdc11f813\") " pod="calico-system/calico-kube-controllers-6549ddf4c9-rbbqr" Sep 12 22:57:16.264716 kubelet[3355]: I0912 22:57:16.262719 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccbc5552-229a-44db-b0cc-8d9c359fbcb5-config-volume\") pod \"coredns-7c65d6cfc9-84v28\" (UID: \"ccbc5552-229a-44db-b0cc-8d9c359fbcb5\") " pod="kube-system/coredns-7c65d6cfc9-84v28" Sep 12 22:57:16.277740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7a75d381e7c21724e22e56f12e1e30f39837314a72450d2a01138bb1590ad84-rootfs.mount: Deactivated successfully. 
Sep 12 22:57:16.295757 kubelet[3355]: W0912 22:57:16.294178 3355 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-30-120" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-30-120' and this object
Sep 12 22:57:16.295757 kubelet[3355]: E0912 22:57:16.294289 3355 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ip-172-31-30-120\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-30-120' and this object" logger="UnhandledError"
Sep 12 22:57:16.295757 kubelet[3355]: W0912 22:57:16.294539 3355 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-120" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-30-120' and this object
Sep 12 22:57:16.295757 kubelet[3355]: E0912 22:57:16.294776 3355 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-30-120\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-30-120' and this object" logger="UnhandledError"
Sep 12 22:57:16.297314 systemd[1]: Created slice kubepods-burstable-podccbc5552_229a_44db_b0cc_8d9c359fbcb5.slice - libcontainer container kubepods-burstable-podccbc5552_229a_44db_b0cc_8d9c359fbcb5.slice.
Sep 12 22:57:16.309914 systemd[1]: Created slice kubepods-burstable-podc3e0a937_c0ff_49cd_987c_00e813a4ecaf.slice - libcontainer container kubepods-burstable-podc3e0a937_c0ff_49cd_987c_00e813a4ecaf.slice.
Sep 12 22:57:16.324288 systemd[1]: Created slice kubepods-besteffort-pod02020cce_2386_4312_944b_9136ad36d797.slice - libcontainer container kubepods-besteffort-pod02020cce_2386_4312_944b_9136ad36d797.slice.
Sep 12 22:57:16.371233 systemd[1]: Created slice kubepods-besteffort-pod44432559_50cd_4e4e_a1e6_5eddd4740dfb.slice - libcontainer container kubepods-besteffort-pod44432559_50cd_4e4e_a1e6_5eddd4740dfb.slice.
Sep 12 22:57:16.377470 systemd[1]: Created slice kubepods-besteffort-podb7e68407_bfcb_4501_bdff_a7deadef4b7e.slice - libcontainer container kubepods-besteffort-podb7e68407_bfcb_4501_bdff_a7deadef4b7e.slice.
Sep 12 22:57:16.401282 systemd[1]: Created slice kubepods-besteffort-pod3874e0de_cb88_4eca_a88f_0286a3db799c.slice - libcontainer container kubepods-besteffort-pod3874e0de_cb88_4eca_a88f_0286a3db799c.slice.
Sep 12 22:57:16.464210 kubelet[3355]: I0912 22:57:16.464114 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3874e0de-cb88-4eca-a88f-0286a3db799c-goldmane-key-pair\") pod \"goldmane-7988f88666-bn5vs\" (UID: \"3874e0de-cb88-4eca-a88f-0286a3db799c\") " pod="calico-system/goldmane-7988f88666-bn5vs"
Sep 12 22:57:16.464423 kubelet[3355]: I0912 22:57:16.464231 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/44432559-50cd-4e4e-a1e6-5eddd4740dfb-calico-apiserver-certs\") pod \"calico-apiserver-5b6c6f8c54-dmtph\" (UID: \"44432559-50cd-4e4e-a1e6-5eddd4740dfb\") " pod="calico-apiserver/calico-apiserver-5b6c6f8c54-dmtph"
Sep 12 22:57:16.464423 kubelet[3355]: I0912 22:57:16.464289 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e68407-bfcb-4501-bdff-a7deadef4b7e-whisker-ca-bundle\") pod \"whisker-694df8d9c5-xsl92\" (UID: \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\") " pod="calico-system/whisker-694df8d9c5-xsl92"
Sep 12 22:57:16.464423 kubelet[3355]: I0912 22:57:16.464316 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257hl\" (UniqueName: \"kubernetes.io/projected/b7e68407-bfcb-4501-bdff-a7deadef4b7e-kube-api-access-257hl\") pod \"whisker-694df8d9c5-xsl92\" (UID: \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\") " pod="calico-system/whisker-694df8d9c5-xsl92"
Sep 12 22:57:16.464423 kubelet[3355]: I0912 22:57:16.464348 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtrj\" (UniqueName: \"kubernetes.io/projected/44432559-50cd-4e4e-a1e6-5eddd4740dfb-kube-api-access-zqtrj\") pod \"calico-apiserver-5b6c6f8c54-dmtph\" (UID: \"44432559-50cd-4e4e-a1e6-5eddd4740dfb\") " pod="calico-apiserver/calico-apiserver-5b6c6f8c54-dmtph"
Sep 12 22:57:16.464686 kubelet[3355]: I0912 22:57:16.464523 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3874e0de-cb88-4eca-a88f-0286a3db799c-config\") pod \"goldmane-7988f88666-bn5vs\" (UID: \"3874e0de-cb88-4eca-a88f-0286a3db799c\") " pod="calico-system/goldmane-7988f88666-bn5vs"
Sep 12 22:57:16.464686 kubelet[3355]: I0912 22:57:16.464615 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45vrt\" (UniqueName: \"kubernetes.io/projected/3874e0de-cb88-4eca-a88f-0286a3db799c-kube-api-access-45vrt\") pod \"goldmane-7988f88666-bn5vs\" (UID: \"3874e0de-cb88-4eca-a88f-0286a3db799c\") " pod="calico-system/goldmane-7988f88666-bn5vs"
Sep 12 22:57:16.464782 kubelet[3355]: I0912 22:57:16.464700 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7e68407-bfcb-4501-bdff-a7deadef4b7e-whisker-backend-key-pair\") pod \"whisker-694df8d9c5-xsl92\" (UID: \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\") " pod="calico-system/whisker-694df8d9c5-xsl92"
Sep 12 22:57:16.464830 kubelet[3355]: I0912 22:57:16.464768 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3874e0de-cb88-4eca-a88f-0286a3db799c-goldmane-ca-bundle\") pod \"goldmane-7988f88666-bn5vs\" (UID: \"3874e0de-cb88-4eca-a88f-0286a3db799c\") " pod="calico-system/goldmane-7988f88666-bn5vs"
Sep 12 22:57:16.552794 containerd[2001]: time="2025-09-12T22:57:16.552736894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6549ddf4c9-rbbqr,Uid:73452f16-5cdc-4405-a665-642cdc11f813,Namespace:calico-system,Attempt:0,}"
Sep 12 22:57:16.608081 containerd[2001]: time="2025-09-12T22:57:16.608041107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-84v28,Uid:ccbc5552-229a-44db-b0cc-8d9c359fbcb5,Namespace:kube-system,Attempt:0,}"
Sep 12 22:57:16.617401 containerd[2001]: time="2025-09-12T22:57:16.617267476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9bnxv,Uid:c3e0a937-c0ff-49cd-987c-00e813a4ecaf,Namespace:kube-system,Attempt:0,}"
Sep 12 22:57:16.694986 containerd[2001]: time="2025-09-12T22:57:16.694734056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-694df8d9c5-xsl92,Uid:b7e68407-bfcb-4501-bdff-a7deadef4b7e,Namespace:calico-system,Attempt:0,}"
Sep 12 22:57:16.723484 containerd[2001]: time="2025-09-12T22:57:16.723438042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bn5vs,Uid:3874e0de-cb88-4eca-a88f-0286a3db799c,Namespace:calico-system,Attempt:0,}"
Sep 12 22:57:16.846682 containerd[2001]: time="2025-09-12T22:57:16.845874517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 22:57:17.016939 containerd[2001]: time="2025-09-12T22:57:17.016729649Z" level=error msg="Failed to destroy network for sandbox \"d850cd9d5bf1b0189e048c07658552305ddce51c5a6ba56c2c2a16e2eb708a31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.020091 containerd[2001]: time="2025-09-12T22:57:17.019817125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6549ddf4c9-rbbqr,Uid:73452f16-5cdc-4405-a665-642cdc11f813,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d850cd9d5bf1b0189e048c07658552305ddce51c5a6ba56c2c2a16e2eb708a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.020569 containerd[2001]: time="2025-09-12T22:57:17.019998924Z" level=error msg="Failed to destroy network for sandbox \"444e65b703840f5d0d2043d50745abf779b20d9b146b968c3ec767529087d473\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.023551 containerd[2001]: time="2025-09-12T22:57:17.023481524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-694df8d9c5-xsl92,Uid:b7e68407-bfcb-4501-bdff-a7deadef4b7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"444e65b703840f5d0d2043d50745abf779b20d9b146b968c3ec767529087d473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.023914 kubelet[3355]: E0912 22:57:17.023661 3355 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d850cd9d5bf1b0189e048c07658552305ddce51c5a6ba56c2c2a16e2eb708a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.023914 kubelet[3355]: E0912 22:57:17.023763 3355 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d850cd9d5bf1b0189e048c07658552305ddce51c5a6ba56c2c2a16e2eb708a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6549ddf4c9-rbbqr"
Sep 12 22:57:17.023914 kubelet[3355]: E0912 22:57:17.023794 3355 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d850cd9d5bf1b0189e048c07658552305ddce51c5a6ba56c2c2a16e2eb708a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6549ddf4c9-rbbqr"
Sep 12 22:57:17.025973 kubelet[3355]: E0912 22:57:17.023856 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6549ddf4c9-rbbqr_calico-system(73452f16-5cdc-4405-a665-642cdc11f813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6549ddf4c9-rbbqr_calico-system(73452f16-5cdc-4405-a665-642cdc11f813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d850cd9d5bf1b0189e048c07658552305ddce51c5a6ba56c2c2a16e2eb708a31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6549ddf4c9-rbbqr" podUID="73452f16-5cdc-4405-a665-642cdc11f813"
Sep 12 22:57:17.048328 containerd[2001]: time="2025-09-12T22:57:17.048264773Z" level=error msg="Failed to destroy network for sandbox \"5991706f9d2d8704130c05d05a0b5d59bc004642dde33852648655046aae02a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.049273 kubelet[3355]: E0912 22:57:17.049213 3355 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444e65b703840f5d0d2043d50745abf779b20d9b146b968c3ec767529087d473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.049933 kubelet[3355]: E0912 22:57:17.049316 3355 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444e65b703840f5d0d2043d50745abf779b20d9b146b968c3ec767529087d473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-694df8d9c5-xsl92"
Sep 12 22:57:17.049933 kubelet[3355]: E0912 22:57:17.049357 3355 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444e65b703840f5d0d2043d50745abf779b20d9b146b968c3ec767529087d473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-694df8d9c5-xsl92"
Sep 12 22:57:17.049933 kubelet[3355]: E0912 22:57:17.049469 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-694df8d9c5-xsl92_calico-system(b7e68407-bfcb-4501-bdff-a7deadef4b7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-694df8d9c5-xsl92_calico-system(b7e68407-bfcb-4501-bdff-a7deadef4b7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"444e65b703840f5d0d2043d50745abf779b20d9b146b968c3ec767529087d473\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-694df8d9c5-xsl92" podUID="b7e68407-bfcb-4501-bdff-a7deadef4b7e"
Sep 12 22:57:17.050540 containerd[2001]: time="2025-09-12T22:57:17.050498385Z" level=error msg="Failed to destroy network for sandbox \"e24f7baab006ae61d79110621e0e5457b86c10ed8af1f828f5e94959f6bb04fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.051393 containerd[2001]: time="2025-09-12T22:57:17.051161857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-84v28,Uid:ccbc5552-229a-44db-b0cc-8d9c359fbcb5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5991706f9d2d8704130c05d05a0b5d59bc004642dde33852648655046aae02a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.052476 containerd[2001]: time="2025-09-12T22:57:17.052327236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9bnxv,Uid:c3e0a937-c0ff-49cd-987c-00e813a4ecaf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24f7baab006ae61d79110621e0e5457b86c10ed8af1f828f5e94959f6bb04fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.053092 kubelet[3355]: E0912 22:57:17.052885 3355 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24f7baab006ae61d79110621e0e5457b86c10ed8af1f828f5e94959f6bb04fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.053092 kubelet[3355]: E0912 22:57:17.052961 3355 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24f7baab006ae61d79110621e0e5457b86c10ed8af1f828f5e94959f6bb04fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9bnxv"
Sep 12 22:57:17.054613 kubelet[3355]: E0912 22:57:17.054275 3355 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5991706f9d2d8704130c05d05a0b5d59bc004642dde33852648655046aae02a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.054613 kubelet[3355]: E0912 22:57:17.054336 3355 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5991706f9d2d8704130c05d05a0b5d59bc004642dde33852648655046aae02a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-84v28"
Sep 12 22:57:17.054613 kubelet[3355]: E0912 22:57:17.054360 3355 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5991706f9d2d8704130c05d05a0b5d59bc004642dde33852648655046aae02a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-84v28"
Sep 12 22:57:17.054821 kubelet[3355]: E0912 22:57:17.054444 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-84v28_kube-system(ccbc5552-229a-44db-b0cc-8d9c359fbcb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-84v28_kube-system(ccbc5552-229a-44db-b0cc-8d9c359fbcb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5991706f9d2d8704130c05d05a0b5d59bc004642dde33852648655046aae02a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-84v28" podUID="ccbc5552-229a-44db-b0cc-8d9c359fbcb5"
Sep 12 22:57:17.054821 kubelet[3355]: E0912 22:57:17.052989 3355 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24f7baab006ae61d79110621e0e5457b86c10ed8af1f828f5e94959f6bb04fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9bnxv"
Sep 12 22:57:17.054821 kubelet[3355]: E0912 22:57:17.054554 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9bnxv_kube-system(c3e0a937-c0ff-49cd-987c-00e813a4ecaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9bnxv_kube-system(c3e0a937-c0ff-49cd-987c-00e813a4ecaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e24f7baab006ae61d79110621e0e5457b86c10ed8af1f828f5e94959f6bb04fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9bnxv" podUID="c3e0a937-c0ff-49cd-987c-00e813a4ecaf"
Sep 12 22:57:17.059032 containerd[2001]: time="2025-09-12T22:57:17.058981080Z" level=error msg="Failed to destroy network for sandbox \"240fa2d3ef749e025871423b0395da3a41f43d283a6328185b65374f3b2b8438\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.060414 containerd[2001]: time="2025-09-12T22:57:17.060260796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bn5vs,Uid:3874e0de-cb88-4eca-a88f-0286a3db799c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"240fa2d3ef749e025871423b0395da3a41f43d283a6328185b65374f3b2b8438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.060761 kubelet[3355]: E0912 22:57:17.060717 3355 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"240fa2d3ef749e025871423b0395da3a41f43d283a6328185b65374f3b2b8438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.060890 kubelet[3355]: E0912 22:57:17.060788 3355 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"240fa2d3ef749e025871423b0395da3a41f43d283a6328185b65374f3b2b8438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-bn5vs"
Sep 12 22:57:17.060890 kubelet[3355]: E0912 22:57:17.060815 3355 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"240fa2d3ef749e025871423b0395da3a41f43d283a6328185b65374f3b2b8438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-bn5vs"
Sep 12 22:57:17.060890 kubelet[3355]: E0912 22:57:17.060871 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-bn5vs_calico-system(3874e0de-cb88-4eca-a88f-0286a3db799c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-bn5vs_calico-system(3874e0de-cb88-4eca-a88f-0286a3db799c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"240fa2d3ef749e025871423b0395da3a41f43d283a6328185b65374f3b2b8438\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-bn5vs" podUID="3874e0de-cb88-4eca-a88f-0286a3db799c"
Sep 12 22:57:17.281689 systemd[1]: run-netns-cni\x2d8e689aed\x2d5bf9\x2d8f3e\x2dc514\x2d0bc50864de7f.mount: Deactivated successfully.
Sep 12 22:57:17.392339 kubelet[3355]: E0912 22:57:17.392257 3355 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Sep 12 22:57:17.392533 kubelet[3355]: E0912 22:57:17.392450 3355 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02020cce-2386-4312-944b-9136ad36d797-calico-apiserver-certs podName:02020cce-2386-4312-944b-9136ad36d797 nodeName:}" failed. No retries permitted until 2025-09-12 22:57:17.892416256 +0000 UTC m=+35.450464034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/02020cce-2386-4312-944b-9136ad36d797-calico-apiserver-certs") pod "calico-apiserver-5b6c6f8c54-wvkpl" (UID: "02020cce-2386-4312-944b-9136ad36d797") : failed to sync secret cache: timed out waiting for the condition
Sep 12 22:57:17.566142 kubelet[3355]: E0912 22:57:17.565713 3355 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Sep 12 22:57:17.566142 kubelet[3355]: E0912 22:57:17.565831 3355 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44432559-50cd-4e4e-a1e6-5eddd4740dfb-calico-apiserver-certs podName:44432559-50cd-4e4e-a1e6-5eddd4740dfb nodeName:}" failed. No retries permitted until 2025-09-12 22:57:18.06580751 +0000 UTC m=+35.623855275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/44432559-50cd-4e4e-a1e6-5eddd4740dfb-calico-apiserver-certs") pod "calico-apiserver-5b6c6f8c54-dmtph" (UID: "44432559-50cd-4e4e-a1e6-5eddd4740dfb") : failed to sync secret cache: timed out waiting for the condition
Sep 12 22:57:17.601677 systemd[1]: Created slice kubepods-besteffort-podb680e444_12ae_4def_88c1_f80311726a91.slice - libcontainer container kubepods-besteffort-podb680e444_12ae_4def_88c1_f80311726a91.slice.
Sep 12 22:57:17.604901 containerd[2001]: time="2025-09-12T22:57:17.604860778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-25l5j,Uid:b680e444-12ae-4def-88c1-f80311726a91,Namespace:calico-system,Attempt:0,}"
Sep 12 22:57:17.670939 containerd[2001]: time="2025-09-12T22:57:17.670874900Z" level=error msg="Failed to destroy network for sandbox \"bcff649a0931d738c80d611706d0d2f7a3ffb789a3bdd6e588ec2e4b414524d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.676936 systemd[1]: run-netns-cni\x2dae6d211b\x2db5ea\x2dd399\x2dfe77\x2d5c843f7edef1.mount: Deactivated successfully.
Sep 12 22:57:17.677850 containerd[2001]: time="2025-09-12T22:57:17.676947478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-25l5j,Uid:b680e444-12ae-4def-88c1-f80311726a91,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcff649a0931d738c80d611706d0d2f7a3ffb789a3bdd6e588ec2e4b414524d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.677995 kubelet[3355]: E0912 22:57:17.677266 3355 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcff649a0931d738c80d611706d0d2f7a3ffb789a3bdd6e588ec2e4b414524d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:17.677995 kubelet[3355]: E0912 22:57:17.677317 3355 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcff649a0931d738c80d611706d0d2f7a3ffb789a3bdd6e588ec2e4b414524d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-25l5j"
Sep 12 22:57:17.677995 kubelet[3355]: E0912 22:57:17.677342 3355 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcff649a0931d738c80d611706d0d2f7a3ffb789a3bdd6e588ec2e4b414524d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-25l5j"
Sep 12 22:57:17.678141 kubelet[3355]: E0912 22:57:17.677404 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-25l5j_calico-system(b680e444-12ae-4def-88c1-f80311726a91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-25l5j_calico-system(b680e444-12ae-4def-88c1-f80311726a91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcff649a0931d738c80d611706d0d2f7a3ffb789a3bdd6e588ec2e4b414524d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-25l5j" podUID="b680e444-12ae-4def-88c1-f80311726a91"
Sep 12 22:57:18.137834 containerd[2001]: time="2025-09-12T22:57:18.137769109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6c6f8c54-wvkpl,Uid:02020cce-2386-4312-944b-9136ad36d797,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 22:57:18.186241 containerd[2001]: time="2025-09-12T22:57:18.185830312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6c6f8c54-dmtph,Uid:44432559-50cd-4e4e-a1e6-5eddd4740dfb,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 22:57:18.229398 containerd[2001]: time="2025-09-12T22:57:18.229330723Z" level=error msg="Failed to destroy network for sandbox \"0b5c5cd4ef3cab8efd0e8899c90b5aafb63da39ee927c9ba99ee10432b1f1a2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:18.234354 containerd[2001]: time="2025-09-12T22:57:18.232652634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6c6f8c54-wvkpl,Uid:02020cce-2386-4312-944b-9136ad36d797,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b5c5cd4ef3cab8efd0e8899c90b5aafb63da39ee927c9ba99ee10432b1f1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:18.234579 kubelet[3355]: E0912 22:57:18.232923 3355 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b5c5cd4ef3cab8efd0e8899c90b5aafb63da39ee927c9ba99ee10432b1f1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:18.234579 kubelet[3355]: E0912 22:57:18.232987 3355 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b5c5cd4ef3cab8efd0e8899c90b5aafb63da39ee927c9ba99ee10432b1f1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b6c6f8c54-wvkpl"
Sep 12 22:57:18.234579 kubelet[3355]: E0912 22:57:18.233015 3355 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b5c5cd4ef3cab8efd0e8899c90b5aafb63da39ee927c9ba99ee10432b1f1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b6c6f8c54-wvkpl"
Sep 12 22:57:18.235051 kubelet[3355]: E0912 22:57:18.233081 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b6c6f8c54-wvkpl_calico-apiserver(02020cce-2386-4312-944b-9136ad36d797)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b6c6f8c54-wvkpl_calico-apiserver(02020cce-2386-4312-944b-9136ad36d797)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b5c5cd4ef3cab8efd0e8899c90b5aafb63da39ee927c9ba99ee10432b1f1a2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b6c6f8c54-wvkpl" podUID="02020cce-2386-4312-944b-9136ad36d797"
Sep 12 22:57:18.358421 containerd[2001]: time="2025-09-12T22:57:18.356778362Z" level=error msg="Failed to destroy network for sandbox \"ac00199473dbbb6f916e5a4fa5ee0e266aa52802ff1e44ebf5024176dedc51b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:57:18.360911 systemd[1]: run-netns-cni\x2d234b54e1\x2d4f90\x2d5c99\x2df3b0\x2d2bdf2cb35f2f.mount: Deactivated successfully.
Sep 12 22:57:18.361967 containerd[2001]: time="2025-09-12T22:57:18.361290342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6c6f8c54-dmtph,Uid:44432559-50cd-4e4e-a1e6-5eddd4740dfb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac00199473dbbb6f916e5a4fa5ee0e266aa52802ff1e44ebf5024176dedc51b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:57:18.362127 kubelet[3355]: E0912 22:57:18.361555 3355 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac00199473dbbb6f916e5a4fa5ee0e266aa52802ff1e44ebf5024176dedc51b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:57:18.362127 kubelet[3355]: E0912 22:57:18.361609 3355 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac00199473dbbb6f916e5a4fa5ee0e266aa52802ff1e44ebf5024176dedc51b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b6c6f8c54-dmtph" Sep 12 22:57:18.362127 kubelet[3355]: E0912 22:57:18.361629 3355 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac00199473dbbb6f916e5a4fa5ee0e266aa52802ff1e44ebf5024176dedc51b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5b6c6f8c54-dmtph" Sep 12 22:57:18.362354 kubelet[3355]: E0912 22:57:18.361667 3355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b6c6f8c54-dmtph_calico-apiserver(44432559-50cd-4e4e-a1e6-5eddd4740dfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b6c6f8c54-dmtph_calico-apiserver(44432559-50cd-4e4e-a1e6-5eddd4740dfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac00199473dbbb6f916e5a4fa5ee0e266aa52802ff1e44ebf5024176dedc51b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b6c6f8c54-dmtph" podUID="44432559-50cd-4e4e-a1e6-5eddd4740dfb" Sep 12 22:57:24.244234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269941908.mount: Deactivated successfully. 
Sep 12 22:57:24.436997 containerd[2001]: time="2025-09-12T22:57:24.401658139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 22:57:24.470399 containerd[2001]: time="2025-09-12T22:57:24.464400774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:24.495553 containerd[2001]: time="2025-09-12T22:57:24.495425478Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:24.497411 containerd[2001]: time="2025-09-12T22:57:24.497347771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:24.506176 containerd[2001]: time="2025-09-12T22:57:24.506106274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.650531324s" Sep 12 22:57:24.506176 containerd[2001]: time="2025-09-12T22:57:24.506168063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 22:57:24.584015 containerd[2001]: time="2025-09-12T22:57:24.583965670Z" level=info msg="CreateContainer within sandbox \"6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 22:57:24.638404 containerd[2001]: time="2025-09-12T22:57:24.637739266Z" level=info msg="Container 
4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:24.645471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819618658.mount: Deactivated successfully. Sep 12 22:57:24.759293 containerd[2001]: time="2025-09-12T22:57:24.759189125Z" level=info msg="CreateContainer within sandbox \"6901e57e89721c6b00f8b7665090f14b6f45917e6c69beab76eac116399225e8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda\"" Sep 12 22:57:24.760694 containerd[2001]: time="2025-09-12T22:57:24.760659244Z" level=info msg="StartContainer for \"4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda\"" Sep 12 22:57:24.769384 containerd[2001]: time="2025-09-12T22:57:24.769330374Z" level=info msg="connecting to shim 4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda" address="unix:///run/containerd/s/808ec940c1b9f512fb5599c5b49e643f359cd9f2e2da1c25769f54d6ab25be95" protocol=ttrpc version=3 Sep 12 22:57:24.885551 systemd[1]: Started cri-containerd-4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda.scope - libcontainer container 4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda. Sep 12 22:57:24.959907 containerd[2001]: time="2025-09-12T22:57:24.958469668Z" level=info msg="StartContainer for \"4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda\" returns successfully" Sep 12 22:57:25.276732 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 22:57:25.277763 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 22:57:25.311454 kubelet[3355]: I0912 22:57:25.311418 3355 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:25.838038 kubelet[3355]: I0912 22:57:25.837984 3355 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-257hl\" (UniqueName: \"kubernetes.io/projected/b7e68407-bfcb-4501-bdff-a7deadef4b7e-kube-api-access-257hl\") pod \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\" (UID: \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\") " Sep 12 22:57:25.838929 kubelet[3355]: I0912 22:57:25.838051 3355 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e68407-bfcb-4501-bdff-a7deadef4b7e-whisker-ca-bundle\") pod \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\" (UID: \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\") " Sep 12 22:57:25.838929 kubelet[3355]: I0912 22:57:25.838090 3355 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7e68407-bfcb-4501-bdff-a7deadef4b7e-whisker-backend-key-pair\") pod \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\" (UID: \"b7e68407-bfcb-4501-bdff-a7deadef4b7e\") " Sep 12 22:57:25.850655 kubelet[3355]: I0912 22:57:25.850600 3355 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e68407-bfcb-4501-bdff-a7deadef4b7e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b7e68407-bfcb-4501-bdff-a7deadef4b7e" (UID: "b7e68407-bfcb-4501-bdff-a7deadef4b7e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 22:57:25.861466 systemd[1]: var-lib-kubelet-pods-b7e68407\x2dbfcb\x2d4501\x2dbdff\x2da7deadef4b7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d257hl.mount: Deactivated successfully. 
Sep 12 22:57:25.866757 kubelet[3355]: I0912 22:57:25.866711 3355 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e68407-bfcb-4501-bdff-a7deadef4b7e-kube-api-access-257hl" (OuterVolumeSpecName: "kube-api-access-257hl") pod "b7e68407-bfcb-4501-bdff-a7deadef4b7e" (UID: "b7e68407-bfcb-4501-bdff-a7deadef4b7e"). InnerVolumeSpecName "kube-api-access-257hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 22:57:25.867654 systemd[1]: var-lib-kubelet-pods-b7e68407\x2dbfcb\x2d4501\x2dbdff\x2da7deadef4b7e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 22:57:25.868271 kubelet[3355]: I0912 22:57:25.868085 3355 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e68407-bfcb-4501-bdff-a7deadef4b7e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b7e68407-bfcb-4501-bdff-a7deadef4b7e" (UID: "b7e68407-bfcb-4501-bdff-a7deadef4b7e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 22:57:25.889565 systemd[1]: Removed slice kubepods-besteffort-podb7e68407_bfcb_4501_bdff_a7deadef4b7e.slice - libcontainer container kubepods-besteffort-podb7e68407_bfcb_4501_bdff_a7deadef4b7e.slice. 
Sep 12 22:57:25.905824 kubelet[3355]: I0912 22:57:25.905766 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5krq4" podStartSLOduration=2.274064652 podStartE2EDuration="20.905741732s" podCreationTimestamp="2025-09-12 22:57:05 +0000 UTC" firstStartedPulling="2025-09-12 22:57:05.904850464 +0000 UTC m=+23.462898226" lastFinishedPulling="2025-09-12 22:57:24.536527543 +0000 UTC m=+42.094575306" observedRunningTime="2025-09-12 22:57:25.903951366 +0000 UTC m=+43.461999147" watchObservedRunningTime="2025-09-12 22:57:25.905741732 +0000 UTC m=+43.463789534" Sep 12 22:57:25.939711 kubelet[3355]: I0912 22:57:25.939247 3355 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e68407-bfcb-4501-bdff-a7deadef4b7e-whisker-ca-bundle\") on node \"ip-172-31-30-120\" DevicePath \"\"" Sep 12 22:57:25.940138 kubelet[3355]: I0912 22:57:25.939792 3355 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-257hl\" (UniqueName: \"kubernetes.io/projected/b7e68407-bfcb-4501-bdff-a7deadef4b7e-kube-api-access-257hl\") on node \"ip-172-31-30-120\" DevicePath \"\"" Sep 12 22:57:25.940138 kubelet[3355]: I0912 22:57:25.939806 3355 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7e68407-bfcb-4501-bdff-a7deadef4b7e-whisker-backend-key-pair\") on node \"ip-172-31-30-120\" DevicePath \"\"" Sep 12 22:57:26.074022 systemd[1]: Created slice kubepods-besteffort-podab3d6be3_c145_4e54_a210_63354c62c9f1.slice - libcontainer container kubepods-besteffort-podab3d6be3_c145_4e54_a210_63354c62c9f1.slice. 
Sep 12 22:57:26.143828 kubelet[3355]: I0912 22:57:26.143690 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g85l\" (UniqueName: \"kubernetes.io/projected/ab3d6be3-c145-4e54-a210-63354c62c9f1-kube-api-access-6g85l\") pod \"whisker-6b8cdcc966-ckcqf\" (UID: \"ab3d6be3-c145-4e54-a210-63354c62c9f1\") " pod="calico-system/whisker-6b8cdcc966-ckcqf" Sep 12 22:57:26.143828 kubelet[3355]: I0912 22:57:26.143738 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab3d6be3-c145-4e54-a210-63354c62c9f1-whisker-ca-bundle\") pod \"whisker-6b8cdcc966-ckcqf\" (UID: \"ab3d6be3-c145-4e54-a210-63354c62c9f1\") " pod="calico-system/whisker-6b8cdcc966-ckcqf" Sep 12 22:57:26.143828 kubelet[3355]: I0912 22:57:26.143761 3355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab3d6be3-c145-4e54-a210-63354c62c9f1-whisker-backend-key-pair\") pod \"whisker-6b8cdcc966-ckcqf\" (UID: \"ab3d6be3-c145-4e54-a210-63354c62c9f1\") " pod="calico-system/whisker-6b8cdcc966-ckcqf" Sep 12 22:57:26.199542 containerd[2001]: time="2025-09-12T22:57:26.199474632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda\" id:\"0b5dd263d6b07792621a8ac6a21d855cd552849757f6895c9d150c9a39646d6e\" pid:4706 exit_status:1 exited_at:{seconds:1757717846 nanos:176188076}" Sep 12 22:57:26.385634 containerd[2001]: time="2025-09-12T22:57:26.385581615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b8cdcc966-ckcqf,Uid:ab3d6be3-c145-4e54-a210-63354c62c9f1,Namespace:calico-system,Attempt:0,}" Sep 12 22:57:26.597842 kubelet[3355]: I0912 22:57:26.597734 3355 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b7e68407-bfcb-4501-bdff-a7deadef4b7e" path="/var/lib/kubelet/pods/b7e68407-bfcb-4501-bdff-a7deadef4b7e/volumes" Sep 12 22:57:26.942338 (udev-worker)[4660]: Network interface NamePolicy= disabled on kernel command line. Sep 12 22:57:26.949287 systemd-networkd[1815]: cali83ffb073f4c: Link UP Sep 12 22:57:26.951538 systemd-networkd[1815]: cali83ffb073f4c: Gained carrier Sep 12 22:57:26.996686 containerd[2001]: 2025-09-12 22:57:26.427 [INFO][4722] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 22:57:26.996686 containerd[2001]: 2025-09-12 22:57:26.481 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0 whisker-6b8cdcc966- calico-system ab3d6be3-c145-4e54-a210-63354c62c9f1 878 0 2025-09-12 22:57:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b8cdcc966 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-120 whisker-6b8cdcc966-ckcqf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali83ffb073f4c [] [] }} ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Namespace="calico-system" Pod="whisker-6b8cdcc966-ckcqf" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-" Sep 12 22:57:26.996686 containerd[2001]: 2025-09-12 22:57:26.481 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Namespace="calico-system" Pod="whisker-6b8cdcc966-ckcqf" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" Sep 12 22:57:26.996686 containerd[2001]: 2025-09-12 22:57:26.827 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" HandleID="k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Workload="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.831 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" HandleID="k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Workload="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004aa170), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-120", "pod":"whisker-6b8cdcc966-ckcqf", "timestamp":"2025-09-12 22:57:26.82739446 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.831 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.832 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.833 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.858 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" host="ip-172-31-30-120" Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.875 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.881 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.884 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:26.997063 containerd[2001]: 2025-09-12 22:57:26.890 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:26.997445 containerd[2001]: 2025-09-12 22:57:26.890 [INFO][4733] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" host="ip-172-31-30-120" Sep 12 22:57:26.997445 containerd[2001]: 2025-09-12 22:57:26.893 [INFO][4733] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877 Sep 12 22:57:26.997445 containerd[2001]: 2025-09-12 22:57:26.905 [INFO][4733] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" host="ip-172-31-30-120" Sep 12 22:57:26.997445 containerd[2001]: 2025-09-12 22:57:26.916 [INFO][4733] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.129/26] block=192.168.74.128/26 
handle="k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" host="ip-172-31-30-120" Sep 12 22:57:26.997445 containerd[2001]: 2025-09-12 22:57:26.916 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.129/26] handle="k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" host="ip-172-31-30-120" Sep 12 22:57:26.997445 containerd[2001]: 2025-09-12 22:57:26.917 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:26.997445 containerd[2001]: 2025-09-12 22:57:26.917 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.129/26] IPv6=[] ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" HandleID="k8s-pod-network.c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Workload="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" Sep 12 22:57:26.997707 containerd[2001]: 2025-09-12 22:57:26.926 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Namespace="calico-system" Pod="whisker-6b8cdcc966-ckcqf" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0", GenerateName:"whisker-6b8cdcc966-", Namespace:"calico-system", SelfLink:"", UID:"ab3d6be3-c145-4e54-a210-63354c62c9f1", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b8cdcc966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"whisker-6b8cdcc966-ckcqf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali83ffb073f4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:26.997707 containerd[2001]: 2025-09-12 22:57:26.927 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.129/32] ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Namespace="calico-system" Pod="whisker-6b8cdcc966-ckcqf" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" Sep 12 22:57:26.997848 containerd[2001]: 2025-09-12 22:57:26.927 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83ffb073f4c ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Namespace="calico-system" Pod="whisker-6b8cdcc966-ckcqf" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" Sep 12 22:57:26.997848 containerd[2001]: 2025-09-12 22:57:26.953 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Namespace="calico-system" Pod="whisker-6b8cdcc966-ckcqf" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" Sep 12 22:57:26.997944 containerd[2001]: 2025-09-12 22:57:26.955 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Namespace="calico-system" Pod="whisker-6b8cdcc966-ckcqf" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0", GenerateName:"whisker-6b8cdcc966-", Namespace:"calico-system", SelfLink:"", UID:"ab3d6be3-c145-4e54-a210-63354c62c9f1", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b8cdcc966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877", Pod:"whisker-6b8cdcc966-ckcqf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali83ffb073f4c", MAC:"1a:e0:1d:3a:85:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:26.998038 containerd[2001]: 2025-09-12 22:57:26.991 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" Namespace="calico-system" Pod="whisker-6b8cdcc966-ckcqf" 
WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--6b8cdcc966--ckcqf-eth0" Sep 12 22:57:27.319496 containerd[2001]: time="2025-09-12T22:57:27.317598213Z" level=info msg="connecting to shim c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877" address="unix:///run/containerd/s/3bf40ec77e2b9d08f046ac831b8cf25ff0fa34eaae16da90a40d1473f8e85922" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:27.365200 systemd[1]: Started cri-containerd-c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877.scope - libcontainer container c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877. Sep 12 22:57:27.540807 containerd[2001]: time="2025-09-12T22:57:27.540757117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b8cdcc966-ckcqf,Uid:ab3d6be3-c145-4e54-a210-63354c62c9f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877\"" Sep 12 22:57:27.550031 containerd[2001]: time="2025-09-12T22:57:27.549538173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 22:57:27.668423 containerd[2001]: time="2025-09-12T22:57:27.668350321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda\" id:\"f1ef8860d77c980faa39909566f3ada743e8aa1d556bdfbd930fb96958811015\" pid:4777 exit_status:1 exited_at:{seconds:1757717847 nanos:667965994}" Sep 12 22:57:28.108732 systemd-networkd[1815]: vxlan.calico: Link UP Sep 12 22:57:28.110409 systemd-networkd[1815]: vxlan.calico: Gained carrier Sep 12 22:57:28.140315 (udev-worker)[4658]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 22:57:28.215557 systemd-networkd[1815]: cali83ffb073f4c: Gained IPv6LL Sep 12 22:57:28.597829 containerd[2001]: time="2025-09-12T22:57:28.597782069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6c6f8c54-wvkpl,Uid:02020cce-2386-4312-944b-9136ad36d797,Namespace:calico-apiserver,Attempt:0,}" Sep 12 22:57:28.598721 containerd[2001]: time="2025-09-12T22:57:28.598684660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bn5vs,Uid:3874e0de-cb88-4eca-a88f-0286a3db799c,Namespace:calico-system,Attempt:0,}" Sep 12 22:57:28.807579 systemd-networkd[1815]: cali849f1d74e1c: Link UP Sep 12 22:57:28.809230 systemd-networkd[1815]: cali849f1d74e1c: Gained carrier Sep 12 22:57:28.836382 containerd[2001]: 2025-09-12 22:57:28.684 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0 calico-apiserver-5b6c6f8c54- calico-apiserver 02020cce-2386-4312-944b-9136ad36d797 800 0 2025-09-12 22:57:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b6c6f8c54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-120 calico-apiserver-5b6c6f8c54-wvkpl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali849f1d74e1c [] [] }} ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-wvkpl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-" Sep 12 22:57:28.836382 containerd[2001]: 2025-09-12 22:57:28.684 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Namespace="calico-apiserver" 
Pod="calico-apiserver-5b6c6f8c54-wvkpl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" Sep 12 22:57:28.836382 containerd[2001]: 2025-09-12 22:57:28.739 [INFO][5042] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" HandleID="k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Workload="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.740 [INFO][5042] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" HandleID="k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Workload="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-120", "pod":"calico-apiserver-5b6c6f8c54-wvkpl", "timestamp":"2025-09-12 22:57:28.739972356 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.740 [INFO][5042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.740 [INFO][5042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.740 [INFO][5042] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.749 [INFO][5042] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" host="ip-172-31-30-120" Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.757 [INFO][5042] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.765 [INFO][5042] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.768 [INFO][5042] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:28.837299 containerd[2001]: 2025-09-12 22:57:28.778 [INFO][5042] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:28.838508 containerd[2001]: 2025-09-12 22:57:28.778 [INFO][5042] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" host="ip-172-31-30-120" Sep 12 22:57:28.838508 containerd[2001]: 2025-09-12 22:57:28.781 [INFO][5042] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed Sep 12 22:57:28.838508 containerd[2001]: 2025-09-12 22:57:28.785 [INFO][5042] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" host="ip-172-31-30-120" Sep 12 22:57:28.838508 containerd[2001]: 2025-09-12 22:57:28.795 [INFO][5042] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.130/26] block=192.168.74.128/26 
handle="k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" host="ip-172-31-30-120" Sep 12 22:57:28.838508 containerd[2001]: 2025-09-12 22:57:28.795 [INFO][5042] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.130/26] handle="k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" host="ip-172-31-30-120" Sep 12 22:57:28.838508 containerd[2001]: 2025-09-12 22:57:28.795 [INFO][5042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:28.838508 containerd[2001]: 2025-09-12 22:57:28.795 [INFO][5042] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.130/26] IPv6=[] ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" HandleID="k8s-pod-network.5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Workload="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" Sep 12 22:57:28.838782 containerd[2001]: 2025-09-12 22:57:28.802 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-wvkpl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0", GenerateName:"calico-apiserver-5b6c6f8c54-", Namespace:"calico-apiserver", SelfLink:"", UID:"02020cce-2386-4312-944b-9136ad36d797", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6c6f8c54", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"calico-apiserver-5b6c6f8c54-wvkpl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali849f1d74e1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:28.838893 containerd[2001]: 2025-09-12 22:57:28.802 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.130/32] ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-wvkpl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" Sep 12 22:57:28.838893 containerd[2001]: 2025-09-12 22:57:28.802 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali849f1d74e1c ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-wvkpl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" Sep 12 22:57:28.838893 containerd[2001]: 2025-09-12 22:57:28.810 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-wvkpl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" Sep 12 
22:57:28.839026 containerd[2001]: 2025-09-12 22:57:28.810 [INFO][5009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-wvkpl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0", GenerateName:"calico-apiserver-5b6c6f8c54-", Namespace:"calico-apiserver", SelfLink:"", UID:"02020cce-2386-4312-944b-9136ad36d797", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6c6f8c54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed", Pod:"calico-apiserver-5b6c6f8c54-wvkpl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali849f1d74e1c", MAC:"8e:e9:eb:5d:92:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 
22:57:28.839122 containerd[2001]: 2025-09-12 22:57:28.826 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-wvkpl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--wvkpl-eth0" Sep 12 22:57:28.894868 containerd[2001]: time="2025-09-12T22:57:28.893677544Z" level=info msg="connecting to shim 5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed" address="unix:///run/containerd/s/ef1e7f9a8357a24ec3946b0490729f1c5b48eb474dc860ec91adc4813bd8cffe" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:28.955607 systemd[1]: Started cri-containerd-5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed.scope - libcontainer container 5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed. Sep 12 22:57:29.011454 systemd-networkd[1815]: cali7cbc84a1c06: Link UP Sep 12 22:57:29.018977 systemd-networkd[1815]: cali7cbc84a1c06: Gained carrier Sep 12 22:57:29.096428 containerd[2001]: 2025-09-12 22:57:28.679 [INFO][5020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0 goldmane-7988f88666- calico-system 3874e0de-cb88-4eca-a88f-0286a3db799c 801 0 2025-09-12 22:57:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-120 goldmane-7988f88666-bn5vs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7cbc84a1c06 [] [] }} ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Namespace="calico-system" Pod="goldmane-7988f88666-bn5vs" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-" Sep 12 22:57:29.096428 
containerd[2001]: 2025-09-12 22:57:28.679 [INFO][5020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Namespace="calico-system" Pod="goldmane-7988f88666-bn5vs" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" Sep 12 22:57:29.096428 containerd[2001]: 2025-09-12 22:57:28.741 [INFO][5037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" HandleID="k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Workload="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.741 [INFO][5037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" HandleID="k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Workload="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-120", "pod":"goldmane-7988f88666-bn5vs", "timestamp":"2025-09-12 22:57:28.74153856 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.741 [INFO][5037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.795 [INFO][5037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.796 [INFO][5037] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.853 [INFO][5037] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" host="ip-172-31-30-120" Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.872 [INFO][5037] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.888 [INFO][5037] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.895 [INFO][5037] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:29.096724 containerd[2001]: 2025-09-12 22:57:28.919 [INFO][5037] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:29.097089 containerd[2001]: 2025-09-12 22:57:28.920 [INFO][5037] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" host="ip-172-31-30-120" Sep 12 22:57:29.097089 containerd[2001]: 2025-09-12 22:57:28.926 [INFO][5037] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e Sep 12 22:57:29.097089 containerd[2001]: 2025-09-12 22:57:28.942 [INFO][5037] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" host="ip-172-31-30-120" Sep 12 22:57:29.097089 containerd[2001]: 2025-09-12 22:57:28.970 [INFO][5037] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.131/26] block=192.168.74.128/26 
handle="k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" host="ip-172-31-30-120" Sep 12 22:57:29.097089 containerd[2001]: 2025-09-12 22:57:28.970 [INFO][5037] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.131/26] handle="k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" host="ip-172-31-30-120" Sep 12 22:57:29.097089 containerd[2001]: 2025-09-12 22:57:28.970 [INFO][5037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:29.097089 containerd[2001]: 2025-09-12 22:57:28.970 [INFO][5037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.131/26] IPv6=[] ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" HandleID="k8s-pod-network.447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Workload="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" Sep 12 22:57:29.098747 containerd[2001]: 2025-09-12 22:57:28.986 [INFO][5020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Namespace="calico-system" Pod="goldmane-7988f88666-bn5vs" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3874e0de-cb88-4eca-a88f-0286a3db799c", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"goldmane-7988f88666-bn5vs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cbc84a1c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:29.098747 containerd[2001]: 2025-09-12 22:57:28.986 [INFO][5020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.131/32] ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Namespace="calico-system" Pod="goldmane-7988f88666-bn5vs" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" Sep 12 22:57:29.098905 containerd[2001]: 2025-09-12 22:57:28.987 [INFO][5020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cbc84a1c06 ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Namespace="calico-system" Pod="goldmane-7988f88666-bn5vs" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" Sep 12 22:57:29.098905 containerd[2001]: 2025-09-12 22:57:29.016 [INFO][5020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Namespace="calico-system" Pod="goldmane-7988f88666-bn5vs" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" Sep 12 22:57:29.098988 containerd[2001]: 2025-09-12 22:57:29.020 [INFO][5020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Namespace="calico-system" Pod="goldmane-7988f88666-bn5vs" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3874e0de-cb88-4eca-a88f-0286a3db799c", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e", Pod:"goldmane-7988f88666-bn5vs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cbc84a1c06", MAC:"5e:1a:66:fd:37:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:29.099078 containerd[2001]: 2025-09-12 22:57:29.087 [INFO][5020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" Namespace="calico-system" Pod="goldmane-7988f88666-bn5vs" 
WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--7988f88666--bn5vs-eth0" Sep 12 22:57:29.188477 containerd[2001]: time="2025-09-12T22:57:29.187105169Z" level=info msg="connecting to shim 447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e" address="unix:///run/containerd/s/424a2aee742bd29cacde359f7fb9b773231fccf269b2673cc177a872e27d7b3d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:29.264637 systemd[1]: Started cri-containerd-447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e.scope - libcontainer container 447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e. Sep 12 22:57:29.303769 systemd-networkd[1815]: vxlan.calico: Gained IPv6LL Sep 12 22:57:29.357324 containerd[2001]: time="2025-09-12T22:57:29.357276294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:29.361025 containerd[2001]: time="2025-09-12T22:57:29.360987104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 22:57:29.364735 containerd[2001]: time="2025-09-12T22:57:29.364692826Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:29.377840 containerd[2001]: time="2025-09-12T22:57:29.377093771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:29.380407 containerd[2001]: time="2025-09-12T22:57:29.380159440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.829955542s" Sep 12 22:57:29.381637 containerd[2001]: time="2025-09-12T22:57:29.381590222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 22:57:29.388749 containerd[2001]: time="2025-09-12T22:57:29.388707953Z" level=info msg="CreateContainer within sandbox \"c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 22:57:29.393707 containerd[2001]: time="2025-09-12T22:57:29.393662118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6c6f8c54-wvkpl,Uid:02020cce-2386-4312-944b-9136ad36d797,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed\"" Sep 12 22:57:29.397997 containerd[2001]: time="2025-09-12T22:57:29.397517783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 22:57:29.410468 containerd[2001]: time="2025-09-12T22:57:29.410432627Z" level=info msg="Container 56ad6bb5e8ccac25fc0984ec816b886595be80cba1f56aa101c71d415c936ef7: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:29.433281 containerd[2001]: time="2025-09-12T22:57:29.433132870Z" level=info msg="CreateContainer within sandbox \"c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"56ad6bb5e8ccac25fc0984ec816b886595be80cba1f56aa101c71d415c936ef7\"" Sep 12 22:57:29.434665 containerd[2001]: time="2025-09-12T22:57:29.434616418Z" level=info msg="StartContainer for \"56ad6bb5e8ccac25fc0984ec816b886595be80cba1f56aa101c71d415c936ef7\"" Sep 12 22:57:29.441964 containerd[2001]: time="2025-09-12T22:57:29.441617754Z" level=info msg="connecting to shim 
56ad6bb5e8ccac25fc0984ec816b886595be80cba1f56aa101c71d415c936ef7" address="unix:///run/containerd/s/3bf40ec77e2b9d08f046ac831b8cf25ff0fa34eaae16da90a40d1473f8e85922" protocol=ttrpc version=3 Sep 12 22:57:29.456671 containerd[2001]: time="2025-09-12T22:57:29.456471660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bn5vs,Uid:3874e0de-cb88-4eca-a88f-0286a3db799c,Namespace:calico-system,Attempt:0,} returns sandbox id \"447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e\"" Sep 12 22:57:29.472620 systemd[1]: Started cri-containerd-56ad6bb5e8ccac25fc0984ec816b886595be80cba1f56aa101c71d415c936ef7.scope - libcontainer container 56ad6bb5e8ccac25fc0984ec816b886595be80cba1f56aa101c71d415c936ef7. Sep 12 22:57:29.530481 containerd[2001]: time="2025-09-12T22:57:29.530428551Z" level=info msg="StartContainer for \"56ad6bb5e8ccac25fc0984ec816b886595be80cba1f56aa101c71d415c936ef7\" returns successfully" Sep 12 22:57:29.596523 containerd[2001]: time="2025-09-12T22:57:29.596483348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-25l5j,Uid:b680e444-12ae-4def-88c1-f80311726a91,Namespace:calico-system,Attempt:0,}" Sep 12 22:57:29.730445 systemd-networkd[1815]: calia88ad11a5d3: Link UP Sep 12 22:57:29.731552 systemd-networkd[1815]: calia88ad11a5d3: Gained carrier Sep 12 22:57:29.751952 containerd[2001]: 2025-09-12 22:57:29.640 [INFO][5199] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0 csi-node-driver- calico-system b680e444-12ae-4def-88c1-f80311726a91 686 0 2025-09-12 22:57:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-120 
csi-node-driver-25l5j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia88ad11a5d3 [] [] }} ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Namespace="calico-system" Pod="csi-node-driver-25l5j" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--25l5j-" Sep 12 22:57:29.751952 containerd[2001]: 2025-09-12 22:57:29.641 [INFO][5199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Namespace="calico-system" Pod="csi-node-driver-25l5j" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" Sep 12 22:57:29.751952 containerd[2001]: 2025-09-12 22:57:29.677 [INFO][5211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" HandleID="k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Workload="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.677 [INFO][5211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" HandleID="k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Workload="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-120", "pod":"csi-node-driver-25l5j", "timestamp":"2025-09-12 22:57:29.677423562 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.677 [INFO][5211] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.677 [INFO][5211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.677 [INFO][5211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.684 [INFO][5211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" host="ip-172-31-30-120" Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.690 [INFO][5211] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.701 [INFO][5211] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.703 [INFO][5211] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:29.752662 containerd[2001]: 2025-09-12 22:57:29.706 [INFO][5211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:29.754749 containerd[2001]: 2025-09-12 22:57:29.706 [INFO][5211] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" host="ip-172-31-30-120" Sep 12 22:57:29.754749 containerd[2001]: 2025-09-12 22:57:29.708 [INFO][5211] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc Sep 12 22:57:29.754749 containerd[2001]: 2025-09-12 22:57:29.715 [INFO][5211] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" 
host="ip-172-31-30-120" Sep 12 22:57:29.754749 containerd[2001]: 2025-09-12 22:57:29.722 [INFO][5211] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.132/26] block=192.168.74.128/26 handle="k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" host="ip-172-31-30-120" Sep 12 22:57:29.754749 containerd[2001]: 2025-09-12 22:57:29.722 [INFO][5211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.132/26] handle="k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" host="ip-172-31-30-120" Sep 12 22:57:29.754749 containerd[2001]: 2025-09-12 22:57:29.722 [INFO][5211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:29.754749 containerd[2001]: 2025-09-12 22:57:29.722 [INFO][5211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.132/26] IPv6=[] ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" HandleID="k8s-pod-network.a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Workload="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" Sep 12 22:57:29.755069 containerd[2001]: 2025-09-12 22:57:29.725 [INFO][5199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Namespace="calico-system" Pod="csi-node-driver-25l5j" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b680e444-12ae-4def-88c1-f80311726a91", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"csi-node-driver-25l5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia88ad11a5d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:29.755174 containerd[2001]: 2025-09-12 22:57:29.725 [INFO][5199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.132/32] ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Namespace="calico-system" Pod="csi-node-driver-25l5j" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" Sep 12 22:57:29.755174 containerd[2001]: 2025-09-12 22:57:29.725 [INFO][5199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia88ad11a5d3 ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Namespace="calico-system" Pod="csi-node-driver-25l5j" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" Sep 12 22:57:29.755174 containerd[2001]: 2025-09-12 22:57:29.732 [INFO][5199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Namespace="calico-system" Pod="csi-node-driver-25l5j" 
WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" Sep 12 22:57:29.755302 containerd[2001]: 2025-09-12 22:57:29.733 [INFO][5199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Namespace="calico-system" Pod="csi-node-driver-25l5j" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b680e444-12ae-4def-88c1-f80311726a91", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc", Pod:"csi-node-driver-25l5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia88ad11a5d3", MAC:"92:1b:66:da:52:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:29.756592 containerd[2001]: 2025-09-12 22:57:29.746 [INFO][5199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" Namespace="calico-system" Pod="csi-node-driver-25l5j" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--25l5j-eth0" Sep 12 22:57:29.800948 containerd[2001]: time="2025-09-12T22:57:29.800554944Z" level=info msg="connecting to shim a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc" address="unix:///run/containerd/s/632dad90c680aed3495a33eb80a53354f684c48acbfa7267ac1e6fec0d088cbd" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:29.837614 systemd[1]: Started cri-containerd-a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc.scope - libcontainer container a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc. Sep 12 22:57:29.877418 containerd[2001]: time="2025-09-12T22:57:29.877342674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-25l5j,Uid:b680e444-12ae-4def-88c1-f80311726a91,Namespace:calico-system,Attempt:0,} returns sandbox id \"a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc\"" Sep 12 22:57:30.263610 systemd-networkd[1815]: cali849f1d74e1c: Gained IPv6LL Sep 12 22:57:30.839948 systemd-networkd[1815]: calia88ad11a5d3: Gained IPv6LL Sep 12 22:57:31.031563 systemd-networkd[1815]: cali7cbc84a1c06: Gained IPv6LL Sep 12 22:57:31.598357 containerd[2001]: time="2025-09-12T22:57:31.598306801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6549ddf4c9-rbbqr,Uid:73452f16-5cdc-4405-a665-642cdc11f813,Namespace:calico-system,Attempt:0,}" Sep 12 22:57:31.601388 containerd[2001]: time="2025-09-12T22:57:31.599796810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-84v28,Uid:ccbc5552-229a-44db-b0cc-8d9c359fbcb5,Namespace:kube-system,Attempt:0,}" Sep 12 
22:57:32.088516 systemd-networkd[1815]: cali04ccc9c833d: Link UP Sep 12 22:57:32.091681 systemd-networkd[1815]: cali04ccc9c833d: Gained carrier Sep 12 22:57:32.138337 containerd[2001]: 2025-09-12 22:57:31.888 [INFO][5282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0 calico-kube-controllers-6549ddf4c9- calico-system 73452f16-5cdc-4405-a665-642cdc11f813 791 0 2025-09-12 22:57:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6549ddf4c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-120 calico-kube-controllers-6549ddf4c9-rbbqr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali04ccc9c833d [] [] }} ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Namespace="calico-system" Pod="calico-kube-controllers-6549ddf4c9-rbbqr" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-" Sep 12 22:57:32.138337 containerd[2001]: 2025-09-12 22:57:31.890 [INFO][5282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Namespace="calico-system" Pod="calico-kube-controllers-6549ddf4c9-rbbqr" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" Sep 12 22:57:32.138337 containerd[2001]: 2025-09-12 22:57:31.993 [INFO][5309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" HandleID="k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" 
Workload="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:31.995 [INFO][5309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" HandleID="k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Workload="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-120", "pod":"calico-kube-controllers-6549ddf4c9-rbbqr", "timestamp":"2025-09-12 22:57:31.993854679 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:31.995 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:31.995 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:31.995 [INFO][5309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:32.014 [INFO][5309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" host="ip-172-31-30-120" Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:32.026 [INFO][5309] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:32.038 [INFO][5309] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:32.041 [INFO][5309] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:32.138638 containerd[2001]: 2025-09-12 22:57:32.046 [INFO][5309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:32.138876 containerd[2001]: 2025-09-12 22:57:32.046 [INFO][5309] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" host="ip-172-31-30-120" Sep 12 22:57:32.138876 containerd[2001]: 2025-09-12 22:57:32.050 [INFO][5309] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967 Sep 12 22:57:32.138876 containerd[2001]: 2025-09-12 22:57:32.057 [INFO][5309] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" host="ip-172-31-30-120" Sep 12 22:57:32.138876 containerd[2001]: 2025-09-12 22:57:32.075 [INFO][5309] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.133/26] block=192.168.74.128/26 
handle="k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" host="ip-172-31-30-120" Sep 12 22:57:32.138876 containerd[2001]: 2025-09-12 22:57:32.075 [INFO][5309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.133/26] handle="k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" host="ip-172-31-30-120" Sep 12 22:57:32.138876 containerd[2001]: 2025-09-12 22:57:32.076 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:32.138876 containerd[2001]: 2025-09-12 22:57:32.076 [INFO][5309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.133/26] IPv6=[] ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" HandleID="k8s-pod-network.7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Workload="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" Sep 12 22:57:32.139047 containerd[2001]: 2025-09-12 22:57:32.081 [INFO][5282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Namespace="calico-system" Pod="calico-kube-controllers-6549ddf4c9-rbbqr" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0", GenerateName:"calico-kube-controllers-6549ddf4c9-", Namespace:"calico-system", SelfLink:"", UID:"73452f16-5cdc-4405-a665-642cdc11f813", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6549ddf4c9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"calico-kube-controllers-6549ddf4c9-rbbqr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali04ccc9c833d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:32.139111 containerd[2001]: 2025-09-12 22:57:32.082 [INFO][5282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.133/32] ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Namespace="calico-system" Pod="calico-kube-controllers-6549ddf4c9-rbbqr" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" Sep 12 22:57:32.139111 containerd[2001]: 2025-09-12 22:57:32.082 [INFO][5282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04ccc9c833d ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Namespace="calico-system" Pod="calico-kube-controllers-6549ddf4c9-rbbqr" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" Sep 12 22:57:32.139111 containerd[2001]: 2025-09-12 22:57:32.096 [INFO][5282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Namespace="calico-system" Pod="calico-kube-controllers-6549ddf4c9-rbbqr" 
WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" Sep 12 22:57:32.139192 containerd[2001]: 2025-09-12 22:57:32.097 [INFO][5282] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Namespace="calico-system" Pod="calico-kube-controllers-6549ddf4c9-rbbqr" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0", GenerateName:"calico-kube-controllers-6549ddf4c9-", Namespace:"calico-system", SelfLink:"", UID:"73452f16-5cdc-4405-a665-642cdc11f813", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6549ddf4c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967", Pod:"calico-kube-controllers-6549ddf4c9-rbbqr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali04ccc9c833d", 
MAC:"d6:29:6d:ce:40:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:32.139869 containerd[2001]: 2025-09-12 22:57:32.129 [INFO][5282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" Namespace="calico-system" Pod="calico-kube-controllers-6549ddf4c9-rbbqr" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6549ddf4c9--rbbqr-eth0" Sep 12 22:57:32.202083 systemd-networkd[1815]: cali6abb6fa8cea: Link UP Sep 12 22:57:32.202312 systemd-networkd[1815]: cali6abb6fa8cea: Gained carrier Sep 12 22:57:32.233764 containerd[2001]: time="2025-09-12T22:57:32.233712984Z" level=info msg="connecting to shim 7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967" address="unix:///run/containerd/s/e2d1890074cf3fd9a3943b95c3c786a9cc845fc962a57d2a7d7e01e3e0a491c8" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:32.268711 containerd[2001]: 2025-09-12 22:57:31.886 [INFO][5287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0 coredns-7c65d6cfc9- kube-system ccbc5552-229a-44db-b0cc-8d9c359fbcb5 797 0 2025-09-12 22:56:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-120 coredns-7c65d6cfc9-84v28 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6abb6fa8cea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-84v28" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-" Sep 12 22:57:32.268711 containerd[2001]: 2025-09-12 22:57:31.886 [INFO][5287] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-84v28" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" Sep 12 22:57:32.268711 containerd[2001]: 2025-09-12 22:57:32.000 [INFO][5307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" HandleID="k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Workload="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.006 [INFO][5307] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" HandleID="k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Workload="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f500), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-120", "pod":"coredns-7c65d6cfc9-84v28", "timestamp":"2025-09-12 22:57:32.000526923 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.007 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.076 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.076 [INFO][5307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.121 [INFO][5307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" host="ip-172-31-30-120" Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.134 [INFO][5307] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.143 [INFO][5307] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.146 [INFO][5307] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:32.269017 containerd[2001]: 2025-09-12 22:57:32.152 [INFO][5307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:32.269751 containerd[2001]: 2025-09-12 22:57:32.152 [INFO][5307] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" host="ip-172-31-30-120" Sep 12 22:57:32.269751 containerd[2001]: 2025-09-12 22:57:32.155 [INFO][5307] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37 Sep 12 22:57:32.269751 containerd[2001]: 2025-09-12 22:57:32.163 [INFO][5307] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" host="ip-172-31-30-120" Sep 12 22:57:32.269751 containerd[2001]: 2025-09-12 22:57:32.182 [INFO][5307] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.134/26] block=192.168.74.128/26 
handle="k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" host="ip-172-31-30-120" Sep 12 22:57:32.269751 containerd[2001]: 2025-09-12 22:57:32.182 [INFO][5307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.134/26] handle="k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" host="ip-172-31-30-120" Sep 12 22:57:32.269751 containerd[2001]: 2025-09-12 22:57:32.183 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:32.269751 containerd[2001]: 2025-09-12 22:57:32.183 [INFO][5307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.134/26] IPv6=[] ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" HandleID="k8s-pod-network.6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Workload="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" Sep 12 22:57:32.272746 containerd[2001]: 2025-09-12 22:57:32.197 [INFO][5287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-84v28" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ccbc5552-229a-44db-b0cc-8d9c359fbcb5", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"coredns-7c65d6cfc9-84v28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6abb6fa8cea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:32.272746 containerd[2001]: 2025-09-12 22:57:32.197 [INFO][5287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.134/32] ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-84v28" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" Sep 12 22:57:32.272746 containerd[2001]: 2025-09-12 22:57:32.198 [INFO][5287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6abb6fa8cea ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-84v28" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" Sep 12 22:57:32.272746 containerd[2001]: 2025-09-12 22:57:32.202 [INFO][5287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-84v28" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" Sep 12 22:57:32.272746 containerd[2001]: 2025-09-12 22:57:32.207 [INFO][5287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-84v28" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ccbc5552-229a-44db-b0cc-8d9c359fbcb5", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37", Pod:"coredns-7c65d6cfc9-84v28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6abb6fa8cea", MAC:"3e:6f:a8:38:f4:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:32.272746 containerd[2001]: 2025-09-12 22:57:32.249 [INFO][5287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-84v28" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--84v28-eth0" Sep 12 22:57:32.315048 systemd[1]: Started sshd@9-172.31.30.120:22-139.178.89.65:42700.service - OpenSSH per-connection server daemon (139.178.89.65:42700). Sep 12 22:57:32.423845 systemd[1]: Started cri-containerd-7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967.scope - libcontainer container 7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967. Sep 12 22:57:32.461476 containerd[2001]: time="2025-09-12T22:57:32.461341765Z" level=info msg="connecting to shim 6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37" address="unix:///run/containerd/s/4b101c76d53fa4bf7e39663e3b6b0821f1529f240a4b27a29c5f8a8774414a0c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:32.574698 systemd[1]: Started cri-containerd-6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37.scope - libcontainer container 6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37. Sep 12 22:57:32.602098 sshd[5359]: Accepted publickey for core from 139.178.89.65 port 42700 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:57:32.607831 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:32.649320 systemd-logind[1971]: New session 10 of user core. 
Sep 12 22:57:32.651766 containerd[2001]: time="2025-09-12T22:57:32.651725117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9bnxv,Uid:c3e0a937-c0ff-49cd-987c-00e813a4ecaf,Namespace:kube-system,Attempt:0,}" Sep 12 22:57:32.652769 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 22:57:32.756990 containerd[2001]: time="2025-09-12T22:57:32.756943613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6549ddf4c9-rbbqr,Uid:73452f16-5cdc-4405-a665-642cdc11f813,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967\"" Sep 12 22:57:32.869888 containerd[2001]: time="2025-09-12T22:57:32.869849152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-84v28,Uid:ccbc5552-229a-44db-b0cc-8d9c359fbcb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37\"" Sep 12 22:57:32.925575 containerd[2001]: time="2025-09-12T22:57:32.925196725Z" level=info msg="CreateContainer within sandbox \"6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:57:33.000150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764975710.mount: Deactivated successfully. 
Sep 12 22:57:33.016796 containerd[2001]: time="2025-09-12T22:57:33.016452884Z" level=info msg="Container c7d34dd32c3ce4c4d19afd97fe5417def65a4caac5631bc5c6f8972890299cf3: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:33.051812 containerd[2001]: time="2025-09-12T22:57:33.051623153Z" level=info msg="CreateContainer within sandbox \"6c92484f46d70041e025c114a9e20ee0ea73ea16bc1a3f23a6781a5b15573a37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7d34dd32c3ce4c4d19afd97fe5417def65a4caac5631bc5c6f8972890299cf3\"" Sep 12 22:57:33.104031 containerd[2001]: time="2025-09-12T22:57:33.102587537Z" level=info msg="StartContainer for \"c7d34dd32c3ce4c4d19afd97fe5417def65a4caac5631bc5c6f8972890299cf3\"" Sep 12 22:57:33.108054 containerd[2001]: time="2025-09-12T22:57:33.105892216Z" level=info msg="connecting to shim c7d34dd32c3ce4c4d19afd97fe5417def65a4caac5631bc5c6f8972890299cf3" address="unix:///run/containerd/s/4b101c76d53fa4bf7e39663e3b6b0821f1529f240a4b27a29c5f8a8774414a0c" protocol=ttrpc version=3 Sep 12 22:57:33.173700 systemd[1]: Started cri-containerd-c7d34dd32c3ce4c4d19afd97fe5417def65a4caac5631bc5c6f8972890299cf3.scope - libcontainer container c7d34dd32c3ce4c4d19afd97fe5417def65a4caac5631bc5c6f8972890299cf3. 
Sep 12 22:57:33.371272 systemd-networkd[1815]: califefc8fd25a0: Link UP Sep 12 22:57:33.383343 systemd-networkd[1815]: califefc8fd25a0: Gained carrier Sep 12 22:57:33.396657 containerd[2001]: time="2025-09-12T22:57:33.396619044Z" level=info msg="StartContainer for \"c7d34dd32c3ce4c4d19afd97fe5417def65a4caac5631bc5c6f8972890299cf3\" returns successfully" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:32.880 [INFO][5421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0 coredns-7c65d6cfc9- kube-system c3e0a937-c0ff-49cd-987c-00e813a4ecaf 803 0 2025-09-12 22:56:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-120 coredns-7c65d6cfc9-9bnxv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califefc8fd25a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9bnxv" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:32.880 [INFO][5421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9bnxv" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.049 [INFO][5454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" HandleID="k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" 
Workload="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.050 [INFO][5454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" HandleID="k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Workload="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003558a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-120", "pod":"coredns-7c65d6cfc9-9bnxv", "timestamp":"2025-09-12 22:57:33.048108672 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.052 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.052 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.052 [INFO][5454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.091 [INFO][5454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.105 [INFO][5454] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.127 [INFO][5454] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.135 [INFO][5454] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.144 [INFO][5454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.145 [INFO][5454] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.155 [INFO][5454] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76 Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.251 [INFO][5454] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.301 [INFO][5454] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.135/26] block=192.168.74.128/26 
handle="k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.301 [INFO][5454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.135/26] handle="k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" host="ip-172-31-30-120" Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.301 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:33.467750 containerd[2001]: 2025-09-12 22:57:33.301 [INFO][5454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.135/26] IPv6=[] ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" HandleID="k8s-pod-network.0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Workload="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" Sep 12 22:57:33.472465 containerd[2001]: 2025-09-12 22:57:33.319 [INFO][5421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9bnxv" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c3e0a937-c0ff-49cd-987c-00e813a4ecaf", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"coredns-7c65d6cfc9-9bnxv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califefc8fd25a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:33.472465 containerd[2001]: 2025-09-12 22:57:33.320 [INFO][5421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.135/32] ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9bnxv" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" Sep 12 22:57:33.472465 containerd[2001]: 2025-09-12 22:57:33.320 [INFO][5421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califefc8fd25a0 ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9bnxv" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" Sep 12 22:57:33.472465 containerd[2001]: 2025-09-12 22:57:33.393 [INFO][5421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-9bnxv" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" Sep 12 22:57:33.472465 containerd[2001]: 2025-09-12 22:57:33.394 [INFO][5421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9bnxv" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c3e0a937-c0ff-49cd-987c-00e813a4ecaf", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76", Pod:"coredns-7c65d6cfc9-9bnxv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califefc8fd25a0", MAC:"26:9b:9c:96:b0:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:33.472465 containerd[2001]: 2025-09-12 22:57:33.448 [INFO][5421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9bnxv" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--7c65d6cfc9--9bnxv-eth0" Sep 12 22:57:33.571682 containerd[2001]: time="2025-09-12T22:57:33.571565391Z" level=info msg="connecting to shim 0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76" address="unix:///run/containerd/s/c4f089970ea69199678d0d663d7a6cc750907303b0556c54900ec36cff5f1199" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:33.597348 containerd[2001]: time="2025-09-12T22:57:33.597283809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6c6f8c54-dmtph,Uid:44432559-50cd-4e4e-a1e6-5eddd4740dfb,Namespace:calico-apiserver,Attempt:0,}" Sep 12 22:57:33.646152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013894866.mount: Deactivated successfully. Sep 12 22:57:33.710674 systemd[1]: Started cri-containerd-0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76.scope - libcontainer container 0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76. 
Sep 12 22:57:33.721027 systemd-networkd[1815]: cali04ccc9c833d: Gained IPv6LL Sep 12 22:57:33.915131 systemd-networkd[1815]: cali6abb6fa8cea: Gained IPv6LL Sep 12 22:57:34.122821 containerd[2001]: time="2025-09-12T22:57:34.122600692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9bnxv,Uid:c3e0a937-c0ff-49cd-987c-00e813a4ecaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76\"" Sep 12 22:57:34.148308 containerd[2001]: time="2025-09-12T22:57:34.147572713Z" level=info msg="CreateContainer within sandbox \"0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:57:34.216008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556351994.mount: Deactivated successfully. Sep 12 22:57:34.226020 containerd[2001]: time="2025-09-12T22:57:34.225977662Z" level=info msg="Container 6dcca87db56935553b288fc027e50aa3feac3ac2d5512ad7ea584676edc28a83: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:34.259768 containerd[2001]: time="2025-09-12T22:57:34.258802888Z" level=info msg="CreateContainer within sandbox \"0bd4dc00b529db0aaa5fea8677142e2958695cdd0391d7b481f15cd8f7173f76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6dcca87db56935553b288fc027e50aa3feac3ac2d5512ad7ea584676edc28a83\"" Sep 12 22:57:34.266122 containerd[2001]: time="2025-09-12T22:57:34.266081061Z" level=info msg="StartContainer for \"6dcca87db56935553b288fc027e50aa3feac3ac2d5512ad7ea584676edc28a83\"" Sep 12 22:57:34.269174 containerd[2001]: time="2025-09-12T22:57:34.269121681Z" level=info msg="connecting to shim 6dcca87db56935553b288fc027e50aa3feac3ac2d5512ad7ea584676edc28a83" address="unix:///run/containerd/s/c4f089970ea69199678d0d663d7a6cc750907303b0556c54900ec36cff5f1199" protocol=ttrpc version=3 Sep 12 22:57:34.359564 sshd[5422]: Connection closed by 139.178.89.65 port 42700 Sep 12 
22:57:34.360136 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:34.381577 systemd[1]: sshd@9-172.31.30.120:22-139.178.89.65:42700.service: Deactivated successfully. Sep 12 22:57:34.388817 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 22:57:34.392333 systemd-logind[1971]: Session 10 logged out. Waiting for processes to exit. Sep 12 22:57:34.406867 systemd[1]: Started cri-containerd-6dcca87db56935553b288fc027e50aa3feac3ac2d5512ad7ea584676edc28a83.scope - libcontainer container 6dcca87db56935553b288fc027e50aa3feac3ac2d5512ad7ea584676edc28a83. Sep 12 22:57:34.412912 systemd-logind[1971]: Removed session 10. Sep 12 22:57:34.528768 systemd-networkd[1815]: cali5035c4f8f37: Link UP Sep 12 22:57:34.531973 systemd-networkd[1815]: cali5035c4f8f37: Gained carrier Sep 12 22:57:34.573753 containerd[2001]: time="2025-09-12T22:57:34.573637226Z" level=info msg="StartContainer for \"6dcca87db56935553b288fc027e50aa3feac3ac2d5512ad7ea584676edc28a83\" returns successfully" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:33.900 [INFO][5529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0 calico-apiserver-5b6c6f8c54- calico-apiserver 44432559-50cd-4e4e-a1e6-5eddd4740dfb 802 0 2025-09-12 22:57:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b6c6f8c54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-120 calico-apiserver-5b6c6f8c54-dmtph eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5035c4f8f37 [] [] }} ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-dmtph" 
WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:33.900 [INFO][5529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-dmtph" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.186 [INFO][5559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" HandleID="k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Workload="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.190 [INFO][5559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" HandleID="k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Workload="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eaeb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-120", "pod":"calico-apiserver-5b6c6f8c54-dmtph", "timestamp":"2025-09-12 22:57:34.186458613 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.191 [INFO][5559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.196 [INFO][5559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.196 [INFO][5559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.264 [INFO][5559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.337 [INFO][5559] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.363 [INFO][5559] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.388 [INFO][5559] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.403 [INFO][5559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.404 [INFO][5559] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.417 [INFO][5559] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.435 [INFO][5559] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.468 [INFO][5559] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.136/26] block=192.168.74.128/26 
handle="k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.468 [INFO][5559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.136/26] handle="k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" host="ip-172-31-30-120" Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.468 [INFO][5559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:34.595679 containerd[2001]: 2025-09-12 22:57:34.468 [INFO][5559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.136/26] IPv6=[] ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" HandleID="k8s-pod-network.82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Workload="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" Sep 12 22:57:34.598211 containerd[2001]: 2025-09-12 22:57:34.486 [INFO][5529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-dmtph" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0", GenerateName:"calico-apiserver-5b6c6f8c54-", Namespace:"calico-apiserver", SelfLink:"", UID:"44432559-50cd-4e4e-a1e6-5eddd4740dfb", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6c6f8c54", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"calico-apiserver-5b6c6f8c54-dmtph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5035c4f8f37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:34.598211 containerd[2001]: 2025-09-12 22:57:34.491 [INFO][5529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.136/32] ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-dmtph" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" Sep 12 22:57:34.598211 containerd[2001]: 2025-09-12 22:57:34.492 [INFO][5529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5035c4f8f37 ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-dmtph" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" Sep 12 22:57:34.598211 containerd[2001]: 2025-09-12 22:57:34.531 [INFO][5529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-dmtph" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" Sep 12 
22:57:34.598211 containerd[2001]: 2025-09-12 22:57:34.538 [INFO][5529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-dmtph" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0", GenerateName:"calico-apiserver-5b6c6f8c54-", Namespace:"calico-apiserver", SelfLink:"", UID:"44432559-50cd-4e4e-a1e6-5eddd4740dfb", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6c6f8c54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff", Pod:"calico-apiserver-5b6c6f8c54-dmtph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5035c4f8f37", MAC:"12:ef:62:8f:28:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 
22:57:34.598211 containerd[2001]: 2025-09-12 22:57:34.580 [INFO][5529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" Namespace="calico-apiserver" Pod="calico-apiserver-5b6c6f8c54-dmtph" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5b6c6f8c54--dmtph-eth0" Sep 12 22:57:34.649313 kubelet[3355]: I0912 22:57:34.649149 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-84v28" podStartSLOduration=45.587591318 podStartE2EDuration="45.587591318s" podCreationTimestamp="2025-09-12 22:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:57:34.311107201 +0000 UTC m=+51.869154984" watchObservedRunningTime="2025-09-12 22:57:34.587591318 +0000 UTC m=+52.145639106" Sep 12 22:57:34.717040 containerd[2001]: time="2025-09-12T22:57:34.716784578Z" level=info msg="connecting to shim 82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff" address="unix:///run/containerd/s/aabe674d1f6c7ac3ac077f55465e3ad059efee697c1f50251a2b90b6bb7be2b3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:34.802960 systemd[1]: Started cri-containerd-82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff.scope - libcontainer container 82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff. 
Sep 12 22:57:34.872558 systemd-networkd[1815]: califefc8fd25a0: Gained IPv6LL Sep 12 22:57:34.954843 containerd[2001]: time="2025-09-12T22:57:34.954784501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6c6f8c54-dmtph,Uid:44432559-50cd-4e4e-a1e6-5eddd4740dfb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff\"" Sep 12 22:57:35.031455 kubelet[3355]: I0912 22:57:35.031182 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9bnxv" podStartSLOduration=46.031158645 podStartE2EDuration="46.031158645s" podCreationTimestamp="2025-09-12 22:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:57:35.026353139 +0000 UTC m=+52.584400927" watchObservedRunningTime="2025-09-12 22:57:35.031158645 +0000 UTC m=+52.589206428" Sep 12 22:57:35.471690 containerd[2001]: time="2025-09-12T22:57:35.471634936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:35.473649 containerd[2001]: time="2025-09-12T22:57:35.473584239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 22:57:35.475828 containerd[2001]: time="2025-09-12T22:57:35.475768453Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:35.479100 containerd[2001]: time="2025-09-12T22:57:35.479040294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:35.480041 containerd[2001]: 
time="2025-09-12T22:57:35.479622038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 6.082064054s" Sep 12 22:57:35.480041 containerd[2001]: time="2025-09-12T22:57:35.479655332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 22:57:35.481448 containerd[2001]: time="2025-09-12T22:57:35.481428898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 22:57:35.482148 containerd[2001]: time="2025-09-12T22:57:35.482121435Z" level=info msg="CreateContainer within sandbox \"5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 22:57:35.504476 containerd[2001]: time="2025-09-12T22:57:35.504426883Z" level=info msg="Container 3b8e51a4a1906de460e61527cf1b3134fe2a3747a55772918724ebd05e5ad489: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:35.522050 containerd[2001]: time="2025-09-12T22:57:35.522011518Z" level=info msg="CreateContainer within sandbox \"5e4c294cb8cb9aee172d850ee3393c4339ac28b6906bd9f19da6315278d1eaed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3b8e51a4a1906de460e61527cf1b3134fe2a3747a55772918724ebd05e5ad489\"" Sep 12 22:57:35.522541 containerd[2001]: time="2025-09-12T22:57:35.522495489Z" level=info msg="StartContainer for \"3b8e51a4a1906de460e61527cf1b3134fe2a3747a55772918724ebd05e5ad489\"" Sep 12 22:57:35.523954 containerd[2001]: time="2025-09-12T22:57:35.523522558Z" level=info msg="connecting to shim 
3b8e51a4a1906de460e61527cf1b3134fe2a3747a55772918724ebd05e5ad489" address="unix:///run/containerd/s/ef1e7f9a8357a24ec3946b0490729f1c5b48eb474dc860ec91adc4813bd8cffe" protocol=ttrpc version=3 Sep 12 22:57:35.545579 systemd[1]: Started cri-containerd-3b8e51a4a1906de460e61527cf1b3134fe2a3747a55772918724ebd05e5ad489.scope - libcontainer container 3b8e51a4a1906de460e61527cf1b3134fe2a3747a55772918724ebd05e5ad489. Sep 12 22:57:35.609201 containerd[2001]: time="2025-09-12T22:57:35.609156185Z" level=info msg="StartContainer for \"3b8e51a4a1906de460e61527cf1b3134fe2a3747a55772918724ebd05e5ad489\" returns successfully" Sep 12 22:57:36.217393 systemd-networkd[1815]: cali5035c4f8f37: Gained IPv6LL Sep 12 22:57:37.047258 kubelet[3355]: I0912 22:57:37.047129 3355 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:38.896250 ntpd[2223]: Listen normally on 6 vxlan.calico 192.168.74.128:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 6 vxlan.calico 192.168.74.128:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 7 cali83ffb073f4c [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 8 vxlan.calico [fe80::645f:29ff:fed6:a68a%5]:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 9 cali849f1d74e1c [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 10 cali7cbc84a1c06 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 11 calia88ad11a5d3 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 12 cali04ccc9c833d [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 13 cali6abb6fa8cea [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 
22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 14 califefc8fd25a0 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 22:57:38.898952 ntpd[2223]: 12 Sep 22:57:38 ntpd[2223]: Listen normally on 15 cali5035c4f8f37 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 22:57:38.896324 ntpd[2223]: Listen normally on 7 cali83ffb073f4c [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 22:57:38.896349 ntpd[2223]: Listen normally on 8 vxlan.calico [fe80::645f:29ff:fed6:a68a%5]:123 Sep 12 22:57:38.896387 ntpd[2223]: Listen normally on 9 cali849f1d74e1c [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 22:57:38.896408 ntpd[2223]: Listen normally on 10 cali7cbc84a1c06 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 22:57:38.896426 ntpd[2223]: Listen normally on 11 calia88ad11a5d3 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 22:57:38.896446 ntpd[2223]: Listen normally on 12 cali04ccc9c833d [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 22:57:38.896466 ntpd[2223]: Listen normally on 13 cali6abb6fa8cea [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 22:57:38.896488 ntpd[2223]: Listen normally on 14 califefc8fd25a0 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 22:57:38.896505 ntpd[2223]: Listen normally on 15 cali5035c4f8f37 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 22:57:39.399228 systemd[1]: Started sshd@10-172.31.30.120:22-139.178.89.65:42712.service - OpenSSH per-connection server daemon (139.178.89.65:42712). Sep 12 22:57:39.662468 sshd[5729]: Accepted publickey for core from 139.178.89.65 port 42712 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:57:39.667496 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:39.678561 systemd-logind[1971]: New session 11 of user core. Sep 12 22:57:39.683842 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 22:57:40.165920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3252217437.mount: Deactivated successfully. 
Sep 12 22:57:40.848620 sshd[5732]: Connection closed by 139.178.89.65 port 42712 Sep 12 22:57:40.853821 systemd[1]: sshd@10-172.31.30.120:22-139.178.89.65:42712.service: Deactivated successfully. Sep 12 22:57:40.849168 sshd-session[5729]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:40.858144 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 22:57:40.863662 systemd-logind[1971]: Session 11 logged out. Waiting for processes to exit. Sep 12 22:57:40.865206 systemd-logind[1971]: Removed session 11. Sep 12 22:57:41.405239 containerd[2001]: time="2025-09-12T22:57:41.405188503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:41.407902 containerd[2001]: time="2025-09-12T22:57:41.407837154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 22:57:41.413479 containerd[2001]: time="2025-09-12T22:57:41.412461078Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:41.415826 containerd[2001]: time="2025-09-12T22:57:41.415782355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:41.416488 containerd[2001]: time="2025-09-12T22:57:41.416423914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.934870707s" Sep 12 22:57:41.416488 containerd[2001]: 
time="2025-09-12T22:57:41.416467182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 22:57:41.443771 containerd[2001]: time="2025-09-12T22:57:41.443733929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 22:57:41.449198 containerd[2001]: time="2025-09-12T22:57:41.449164093Z" level=info msg="CreateContainer within sandbox \"447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 22:57:41.501393 containerd[2001]: time="2025-09-12T22:57:41.500543681Z" level=info msg="Container dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:41.512243 containerd[2001]: time="2025-09-12T22:57:41.512202955Z" level=info msg="CreateContainer within sandbox \"447b4e93d9cb1924660f0b245d4b86e2fd7a3cf8eac30ee61037504007474a4e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\"" Sep 12 22:57:41.513056 containerd[2001]: time="2025-09-12T22:57:41.512980601Z" level=info msg="StartContainer for \"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\"" Sep 12 22:57:41.515213 containerd[2001]: time="2025-09-12T22:57:41.515173696Z" level=info msg="connecting to shim dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7" address="unix:///run/containerd/s/424a2aee742bd29cacde359f7fb9b773231fccf269b2673cc177a872e27d7b3d" protocol=ttrpc version=3 Sep 12 22:57:41.598573 systemd[1]: Started cri-containerd-dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7.scope - libcontainer container dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7. 
Sep 12 22:57:41.745452 containerd[2001]: time="2025-09-12T22:57:41.745344757Z" level=info msg="StartContainer for \"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" returns successfully" Sep 12 22:57:42.164446 kubelet[3355]: I0912 22:57:42.164120 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b6c6f8c54-wvkpl" podStartSLOduration=36.055220425 podStartE2EDuration="42.138941796s" podCreationTimestamp="2025-09-12 22:57:00 +0000 UTC" firstStartedPulling="2025-09-12 22:57:29.396892892 +0000 UTC m=+46.954940659" lastFinishedPulling="2025-09-12 22:57:35.480614259 +0000 UTC m=+53.038662030" observedRunningTime="2025-09-12 22:57:36.137050147 +0000 UTC m=+53.695097934" watchObservedRunningTime="2025-09-12 22:57:42.138941796 +0000 UTC m=+59.696989579" Sep 12 22:57:42.322439 containerd[2001]: time="2025-09-12T22:57:42.322153987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"f77eccae4dc751a6e32b5dbc5d0d8cc50974f3112762bd86761788296e1e2ac6\" pid:5799 exit_status:1 exited_at:{seconds:1757717862 nanos:303400103}" Sep 12 22:57:43.376599 containerd[2001]: time="2025-09-12T22:57:43.376511282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"93f5334ba76e106c532a9139c2b857c2178fc8e0d225b61a4efa83dc0cca2900\" pid:5825 exit_status:1 exited_at:{seconds:1757717863 nanos:376156588}" Sep 12 22:57:44.010954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027339665.mount: Deactivated successfully. 
Sep 12 22:57:44.037632 containerd[2001]: time="2025-09-12T22:57:44.037578376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:44.038873 containerd[2001]: time="2025-09-12T22:57:44.038710854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 22:57:44.040153 containerd[2001]: time="2025-09-12T22:57:44.040034082Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:44.042406 containerd[2001]: time="2025-09-12T22:57:44.042045646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:44.043123 containerd[2001]: time="2025-09-12T22:57:44.042661744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.598887983s" Sep 12 22:57:44.043123 containerd[2001]: time="2025-09-12T22:57:44.042706820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 22:57:44.044552 containerd[2001]: time="2025-09-12T22:57:44.044518348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 22:57:44.047896 containerd[2001]: time="2025-09-12T22:57:44.047545232Z" level=info msg="CreateContainer within sandbox 
\"c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 22:57:44.061106 containerd[2001]: time="2025-09-12T22:57:44.057699332Z" level=info msg="Container 9cd9cc4e94862148536ffddc2cc24bd865e8750fb7e063aa665f534db6a7a9f0: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:44.074340 containerd[2001]: time="2025-09-12T22:57:44.074294894Z" level=info msg="CreateContainer within sandbox \"c2eeb88823140531fa59c97cd6b5b516c0b8bfc4cd23fe2b64d62a6e3cb10877\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9cd9cc4e94862148536ffddc2cc24bd865e8750fb7e063aa665f534db6a7a9f0\"" Sep 12 22:57:44.075569 containerd[2001]: time="2025-09-12T22:57:44.075456921Z" level=info msg="StartContainer for \"9cd9cc4e94862148536ffddc2cc24bd865e8750fb7e063aa665f534db6a7a9f0\"" Sep 12 22:57:44.077930 containerd[2001]: time="2025-09-12T22:57:44.077889426Z" level=info msg="connecting to shim 9cd9cc4e94862148536ffddc2cc24bd865e8750fb7e063aa665f534db6a7a9f0" address="unix:///run/containerd/s/3bf40ec77e2b9d08f046ac831b8cf25ff0fa34eaae16da90a40d1473f8e85922" protocol=ttrpc version=3 Sep 12 22:57:44.117912 systemd[1]: Started cri-containerd-9cd9cc4e94862148536ffddc2cc24bd865e8750fb7e063aa665f534db6a7a9f0.scope - libcontainer container 9cd9cc4e94862148536ffddc2cc24bd865e8750fb7e063aa665f534db6a7a9f0. 
Sep 12 22:57:44.279537 containerd[2001]: time="2025-09-12T22:57:44.279221737Z" level=info msg="StartContainer for \"9cd9cc4e94862148536ffddc2cc24bd865e8750fb7e063aa665f534db6a7a9f0\" returns successfully" Sep 12 22:57:44.318550 containerd[2001]: time="2025-09-12T22:57:44.318501877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"46f15e9965f89d34c071206af5bbaf615cff22c701be59eb82aea87d86d4a99a\" pid:5875 exit_status:1 exited_at:{seconds:1757717864 nanos:318174654}" Sep 12 22:57:45.160436 kubelet[3355]: I0912 22:57:45.159603 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-bn5vs" podStartSLOduration=28.175039351 podStartE2EDuration="40.159579587s" podCreationTimestamp="2025-09-12 22:57:05 +0000 UTC" firstStartedPulling="2025-09-12 22:57:29.459033791 +0000 UTC m=+47.017081558" lastFinishedPulling="2025-09-12 22:57:41.443574032 +0000 UTC m=+59.001621794" observedRunningTime="2025-09-12 22:57:42.171859288 +0000 UTC m=+59.729907049" watchObservedRunningTime="2025-09-12 22:57:45.159579587 +0000 UTC m=+62.717627371" Sep 12 22:57:45.373490 containerd[2001]: time="2025-09-12T22:57:45.373435994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:45.374644 containerd[2001]: time="2025-09-12T22:57:45.374427820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 22:57:45.375745 containerd[2001]: time="2025-09-12T22:57:45.375703757Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:45.393595 containerd[2001]: time="2025-09-12T22:57:45.393538440Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:45.394328 containerd[2001]: time="2025-09-12T22:57:45.394275978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.349725884s" Sep 12 22:57:45.394484 containerd[2001]: time="2025-09-12T22:57:45.394465740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 22:57:45.406059 containerd[2001]: time="2025-09-12T22:57:45.406020330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 22:57:45.421592 containerd[2001]: time="2025-09-12T22:57:45.421480461Z" level=info msg="CreateContainer within sandbox \"a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 22:57:45.469390 containerd[2001]: time="2025-09-12T22:57:45.469228402Z" level=info msg="Container 7714b16de6c7d9504d7acd9c286285bbfe06b7c1ce48064aa8648987a469107e: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:45.511956 containerd[2001]: time="2025-09-12T22:57:45.511885026Z" level=info msg="CreateContainer within sandbox \"a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7714b16de6c7d9504d7acd9c286285bbfe06b7c1ce48064aa8648987a469107e\"" Sep 12 22:57:45.525036 containerd[2001]: time="2025-09-12T22:57:45.524976755Z" level=info msg="StartContainer for 
\"7714b16de6c7d9504d7acd9c286285bbfe06b7c1ce48064aa8648987a469107e\"" Sep 12 22:57:45.526636 containerd[2001]: time="2025-09-12T22:57:45.526600393Z" level=info msg="connecting to shim 7714b16de6c7d9504d7acd9c286285bbfe06b7c1ce48064aa8648987a469107e" address="unix:///run/containerd/s/632dad90c680aed3495a33eb80a53354f684c48acbfa7267ac1e6fec0d088cbd" protocol=ttrpc version=3 Sep 12 22:57:45.558319 systemd[1]: Started cri-containerd-7714b16de6c7d9504d7acd9c286285bbfe06b7c1ce48064aa8648987a469107e.scope - libcontainer container 7714b16de6c7d9504d7acd9c286285bbfe06b7c1ce48064aa8648987a469107e. Sep 12 22:57:45.621570 containerd[2001]: time="2025-09-12T22:57:45.621532097Z" level=info msg="StartContainer for \"7714b16de6c7d9504d7acd9c286285bbfe06b7c1ce48064aa8648987a469107e\" returns successfully" Sep 12 22:57:45.880977 systemd[1]: Started sshd@11-172.31.30.120:22-139.178.89.65:47602.service - OpenSSH per-connection server daemon (139.178.89.65:47602). Sep 12 22:57:46.134260 sshd[5938]: Accepted publickey for core from 139.178.89.65 port 47602 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:57:46.138237 sshd-session[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:46.152706 systemd-logind[1971]: New session 12 of user core. Sep 12 22:57:46.156754 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 22:57:46.921036 containerd[2001]: time="2025-09-12T22:57:46.920909885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"dd8e04321af8fd618e5a5df4b6a388ff1e4ac2cde9afa4438bad900c164bc886\" pid:5964 exited_at:{seconds:1757717866 nanos:918658329}" Sep 12 22:57:47.123152 kubelet[3355]: I0912 22:57:47.123074 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6b8cdcc966-ckcqf" podStartSLOduration=5.623451738 podStartE2EDuration="22.121121474s" podCreationTimestamp="2025-09-12 22:57:25 +0000 UTC" firstStartedPulling="2025-09-12 22:57:27.546238661 +0000 UTC m=+45.104286431" lastFinishedPulling="2025-09-12 22:57:44.043908404 +0000 UTC m=+61.601956167" observedRunningTime="2025-09-12 22:57:45.162102195 +0000 UTC m=+62.720149977" watchObservedRunningTime="2025-09-12 22:57:47.121121474 +0000 UTC m=+64.679169257" Sep 12 22:57:47.228326 sshd[5941]: Connection closed by 139.178.89.65 port 47602 Sep 12 22:57:47.226523 sshd-session[5938]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:47.234140 systemd[1]: sshd@11-172.31.30.120:22-139.178.89.65:47602.service: Deactivated successfully. Sep 12 22:57:47.235882 systemd-logind[1971]: Session 12 logged out. Waiting for processes to exit. Sep 12 22:57:47.238657 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 22:57:47.243220 systemd-logind[1971]: Removed session 12. Sep 12 22:57:47.261506 systemd[1]: Started sshd@12-172.31.30.120:22-139.178.89.65:47604.service - OpenSSH per-connection server daemon (139.178.89.65:47604). Sep 12 22:57:47.443083 sshd[5981]: Accepted publickey for core from 139.178.89.65 port 47604 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:57:47.445262 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:47.453857 systemd-logind[1971]: New session 13 of user core. 
Sep 12 22:57:47.459639 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 22:57:47.912915 sshd[5988]: Connection closed by 139.178.89.65 port 47604 Sep 12 22:57:47.913711 sshd-session[5981]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:47.923482 systemd[1]: sshd@12-172.31.30.120:22-139.178.89.65:47604.service: Deactivated successfully. Sep 12 22:57:47.930262 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 22:57:47.933612 systemd-logind[1971]: Session 13 logged out. Waiting for processes to exit. Sep 12 22:57:47.951009 systemd[1]: Started sshd@13-172.31.30.120:22-139.178.89.65:47616.service - OpenSSH per-connection server daemon (139.178.89.65:47616). Sep 12 22:57:47.954471 systemd-logind[1971]: Removed session 13. Sep 12 22:57:48.195510 sshd[5998]: Accepted publickey for core from 139.178.89.65 port 47616 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:57:48.198708 sshd-session[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:48.219083 systemd-logind[1971]: New session 14 of user core. Sep 12 22:57:48.224037 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 22:57:48.863186 sshd[6001]: Connection closed by 139.178.89.65 port 47616 Sep 12 22:57:48.869341 sshd-session[5998]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:48.878899 systemd[1]: sshd@13-172.31.30.120:22-139.178.89.65:47616.service: Deactivated successfully. Sep 12 22:57:48.882980 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 22:57:48.889326 systemd-logind[1971]: Session 14 logged out. Waiting for processes to exit. Sep 12 22:57:48.891632 systemd-logind[1971]: Removed session 14. 
Sep 12 22:57:49.266606 containerd[2001]: time="2025-09-12T22:57:49.266534445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:49.268563 containerd[2001]: time="2025-09-12T22:57:49.268518854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 22:57:49.270410 containerd[2001]: time="2025-09-12T22:57:49.269485165Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:49.272673 containerd[2001]: time="2025-09-12T22:57:49.272583771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:49.274388 containerd[2001]: time="2025-09-12T22:57:49.273915424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.86784279s" Sep 12 22:57:49.274388 containerd[2001]: time="2025-09-12T22:57:49.273959286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 22:57:49.276425 containerd[2001]: time="2025-09-12T22:57:49.276337164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 22:57:49.433359 containerd[2001]: time="2025-09-12T22:57:49.433013117Z" level=info msg="CreateContainer within sandbox 
\"7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 22:57:49.471584 containerd[2001]: time="2025-09-12T22:57:49.471526435Z" level=info msg="Container 4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:49.509130 containerd[2001]: time="2025-09-12T22:57:49.509081550Z" level=info msg="CreateContainer within sandbox \"7e6ff54634a6f073046f05528a2be21e805e20465824208e34a2cafa86cea967\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300\"" Sep 12 22:57:49.510140 containerd[2001]: time="2025-09-12T22:57:49.510096959Z" level=info msg="StartContainer for \"4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300\"" Sep 12 22:57:49.511982 containerd[2001]: time="2025-09-12T22:57:49.511914011Z" level=info msg="connecting to shim 4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300" address="unix:///run/containerd/s/e2d1890074cf3fd9a3943b95c3c786a9cc845fc962a57d2a7d7e01e3e0a491c8" protocol=ttrpc version=3 Sep 12 22:57:49.618580 systemd[1]: Started cri-containerd-4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300.scope - libcontainer container 4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300. 
Sep 12 22:57:49.735186 containerd[2001]: time="2025-09-12T22:57:49.735142795Z" level=info msg="StartContainer for \"4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300\" returns successfully" Sep 12 22:57:49.790708 containerd[2001]: time="2025-09-12T22:57:49.790662172Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:49.793512 containerd[2001]: time="2025-09-12T22:57:49.793466642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 22:57:49.796819 containerd[2001]: time="2025-09-12T22:57:49.796712505Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 520.340223ms" Sep 12 22:57:49.796819 containerd[2001]: time="2025-09-12T22:57:49.796770782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 22:57:49.799614 containerd[2001]: time="2025-09-12T22:57:49.799552960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 22:57:49.802741 containerd[2001]: time="2025-09-12T22:57:49.802712427Z" level=info msg="CreateContainer within sandbox \"82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 22:57:49.823757 containerd[2001]: time="2025-09-12T22:57:49.822161431Z" level=info msg="Container 9f6a0adca7f87a635496b767409105b9a1c619c7993b3213fc8c2467eaf4b51f: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:49.920334 
containerd[2001]: time="2025-09-12T22:57:49.920285304Z" level=info msg="CreateContainer within sandbox \"82d22fcdf3f22dab2fd51424e7d61b05957c7ab0147191bb2797765989e34fff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f6a0adca7f87a635496b767409105b9a1c619c7993b3213fc8c2467eaf4b51f\"" Sep 12 22:57:49.921788 containerd[2001]: time="2025-09-12T22:57:49.921680338Z" level=info msg="StartContainer for \"9f6a0adca7f87a635496b767409105b9a1c619c7993b3213fc8c2467eaf4b51f\"" Sep 12 22:57:49.923062 containerd[2001]: time="2025-09-12T22:57:49.923031464Z" level=info msg="connecting to shim 9f6a0adca7f87a635496b767409105b9a1c619c7993b3213fc8c2467eaf4b51f" address="unix:///run/containerd/s/aabe674d1f6c7ac3ac077f55465e3ad059efee697c1f50251a2b90b6bb7be2b3" protocol=ttrpc version=3 Sep 12 22:57:49.950633 systemd[1]: Started cri-containerd-9f6a0adca7f87a635496b767409105b9a1c619c7993b3213fc8c2467eaf4b51f.scope - libcontainer container 9f6a0adca7f87a635496b767409105b9a1c619c7993b3213fc8c2467eaf4b51f. 
Sep 12 22:57:50.029530 containerd[2001]: time="2025-09-12T22:57:50.029472400Z" level=info msg="StartContainer for \"9f6a0adca7f87a635496b767409105b9a1c619c7993b3213fc8c2467eaf4b51f\" returns successfully" Sep 12 22:57:50.245991 kubelet[3355]: I0912 22:57:50.245849 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6549ddf4c9-rbbqr" podStartSLOduration=28.714606221 podStartE2EDuration="45.212105605s" podCreationTimestamp="2025-09-12 22:57:05 +0000 UTC" firstStartedPulling="2025-09-12 22:57:32.778362779 +0000 UTC m=+50.336410551" lastFinishedPulling="2025-09-12 22:57:49.275862163 +0000 UTC m=+66.833909935" observedRunningTime="2025-09-12 22:57:50.190848983 +0000 UTC m=+67.748896766" watchObservedRunningTime="2025-09-12 22:57:50.212105605 +0000 UTC m=+67.770153387" Sep 12 22:57:50.340793 containerd[2001]: time="2025-09-12T22:57:50.340719398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300\" id:\"ae1de209ff5b5ef9c3d67e93eb9c1613febe02a0a3ac98ee57b8fbff1ac10a38\" pid:6119 exited_at:{seconds:1757717870 nanos:339663138}" Sep 12 22:57:50.358664 kubelet[3355]: I0912 22:57:50.358539 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b6c6f8c54-dmtph" podStartSLOduration=34.517041095 podStartE2EDuration="49.358512632s" podCreationTimestamp="2025-09-12 22:57:01 +0000 UTC" firstStartedPulling="2025-09-12 22:57:34.957820937 +0000 UTC m=+52.515868698" lastFinishedPulling="2025-09-12 22:57:49.799292461 +0000 UTC m=+67.357340235" observedRunningTime="2025-09-12 22:57:50.246764478 +0000 UTC m=+67.804812256" watchObservedRunningTime="2025-09-12 22:57:50.358512632 +0000 UTC m=+67.916560414" Sep 12 22:57:50.826273 containerd[2001]: time="2025-09-12T22:57:50.826230913Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda\" id:\"de96191a1616fdfd1bb09cb6e25ccaa15747ee997432b07eeb923dae9303ec17\" pid:6131 exit_status:1 exited_at:{seconds:1757717870 nanos:823324254}" Sep 12 22:57:51.183555 kubelet[3355]: I0912 22:57:51.183511 3355 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:51.697915 kubelet[3355]: I0912 22:57:51.697876 3355 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:53.789850 containerd[2001]: time="2025-09-12T22:57:53.789791470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:53.791054 containerd[2001]: time="2025-09-12T22:57:53.791011188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 22:57:53.793323 containerd[2001]: time="2025-09-12T22:57:53.792908610Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:53.795170 containerd[2001]: time="2025-09-12T22:57:53.795132901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:53.796360 containerd[2001]: time="2025-09-12T22:57:53.796327305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.996432612s" Sep 12 
22:57:53.796504 containerd[2001]: time="2025-09-12T22:57:53.796483833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 22:57:53.895964 containerd[2001]: time="2025-09-12T22:57:53.895929155Z" level=info msg="CreateContainer within sandbox \"a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 22:57:53.910037 systemd[1]: Started sshd@14-172.31.30.120:22-139.178.89.65:53928.service - OpenSSH per-connection server daemon (139.178.89.65:53928). Sep 12 22:57:53.919644 containerd[2001]: time="2025-09-12T22:57:53.919530706Z" level=info msg="Container 14080761efd7d74301677198ace4caf0ef4d8776437a8dcd3024df825fbbf3b8: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:53.926921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173062596.mount: Deactivated successfully. 
Sep 12 22:57:53.945477 containerd[2001]: time="2025-09-12T22:57:53.945166814Z" level=info msg="CreateContainer within sandbox \"a65ba60e0249f517aed075975be95b1fecdd6c11c8d36493dacdaaaa6ad4babc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"14080761efd7d74301677198ace4caf0ef4d8776437a8dcd3024df825fbbf3b8\"" Sep 12 22:57:53.947463 containerd[2001]: time="2025-09-12T22:57:53.947160200Z" level=info msg="StartContainer for \"14080761efd7d74301677198ace4caf0ef4d8776437a8dcd3024df825fbbf3b8\"" Sep 12 22:57:53.958742 containerd[2001]: time="2025-09-12T22:57:53.950833778Z" level=info msg="connecting to shim 14080761efd7d74301677198ace4caf0ef4d8776437a8dcd3024df825fbbf3b8" address="unix:///run/containerd/s/632dad90c680aed3495a33eb80a53354f684c48acbfa7267ac1e6fec0d088cbd" protocol=ttrpc version=3 Sep 12 22:57:54.031648 systemd[1]: Started cri-containerd-14080761efd7d74301677198ace4caf0ef4d8776437a8dcd3024df825fbbf3b8.scope - libcontainer container 14080761efd7d74301677198ace4caf0ef4d8776437a8dcd3024df825fbbf3b8. Sep 12 22:57:54.125169 containerd[2001]: time="2025-09-12T22:57:54.124516592Z" level=info msg="StartContainer for \"14080761efd7d74301677198ace4caf0ef4d8776437a8dcd3024df825fbbf3b8\" returns successfully" Sep 12 22:57:54.262687 sshd[6159]: Accepted publickey for core from 139.178.89.65 port 53928 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc Sep 12 22:57:54.265848 sshd-session[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:54.275730 systemd-logind[1971]: New session 15 of user core. Sep 12 22:57:54.284604 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 12 22:57:54.365655 kubelet[3355]: I0912 22:57:54.360056 3355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-25l5j" podStartSLOduration=25.361525098 podStartE2EDuration="49.359104997s" podCreationTimestamp="2025-09-12 22:57:05 +0000 UTC" firstStartedPulling="2025-09-12 22:57:29.880334706 +0000 UTC m=+47.438382466" lastFinishedPulling="2025-09-12 22:57:53.877914606 +0000 UTC m=+71.435962365" observedRunningTime="2025-09-12 22:57:54.357624406 +0000 UTC m=+71.915672190" watchObservedRunningTime="2025-09-12 22:57:54.359104997 +0000 UTC m=+71.917152773"
Sep 12 22:57:55.011447 kubelet[3355]: I0912 22:57:54.989066 3355 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 22:57:55.016684 kubelet[3355]: I0912 22:57:55.016640 3355 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 22:57:55.342838 containerd[2001]: time="2025-09-12T22:57:55.341474403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"fdbd45903a9f187802a38d27be9a092685cbe5a5aea4300fd8cd172f7b3551e7\" pid:6215 exited_at:{seconds:1757717875 nanos:304718275}"
Sep 12 22:57:55.830729 sshd[6193]: Connection closed by 139.178.89.65 port 53928
Sep 12 22:57:55.832136 sshd-session[6159]: pam_unix(sshd:session): session closed for user core
Sep 12 22:57:55.837424 systemd-logind[1971]: Session 15 logged out. Waiting for processes to exit.
Sep 12 22:57:55.837645 systemd[1]: sshd@14-172.31.30.120:22-139.178.89.65:53928.service: Deactivated successfully.
Sep 12 22:57:55.840071 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 22:57:55.844478 systemd-logind[1971]: Removed session 15.
Sep 12 22:58:00.862853 systemd[1]: Started sshd@15-172.31.30.120:22-139.178.89.65:41986.service - OpenSSH per-connection server daemon (139.178.89.65:41986).
Sep 12 22:58:01.073116 sshd[6233]: Accepted publickey for core from 139.178.89.65 port 41986 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:01.074671 sshd-session[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:01.080450 systemd-logind[1971]: New session 16 of user core.
Sep 12 22:58:01.087772 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 22:58:01.552017 sshd[6236]: Connection closed by 139.178.89.65 port 41986
Sep 12 22:58:01.553859 sshd-session[6233]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:01.570970 systemd-logind[1971]: Session 16 logged out. Waiting for processes to exit.
Sep 12 22:58:01.572243 systemd[1]: sshd@15-172.31.30.120:22-139.178.89.65:41986.service: Deactivated successfully.
Sep 12 22:58:01.578484 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 22:58:01.580902 systemd-logind[1971]: Removed session 16.
Sep 12 22:58:06.599271 systemd[1]: Started sshd@16-172.31.30.120:22-139.178.89.65:42000.service - OpenSSH per-connection server daemon (139.178.89.65:42000).
Sep 12 22:58:06.899864 sshd[6254]: Accepted publickey for core from 139.178.89.65 port 42000 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:06.904677 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:06.917888 systemd-logind[1971]: New session 17 of user core.
Sep 12 22:58:06.924498 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 22:58:07.585264 sshd[6257]: Connection closed by 139.178.89.65 port 42000
Sep 12 22:58:07.586089 sshd-session[6254]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:07.590571 systemd-logind[1971]: Session 17 logged out. Waiting for processes to exit.
Sep 12 22:58:07.594062 systemd[1]: sshd@16-172.31.30.120:22-139.178.89.65:42000.service: Deactivated successfully.
Sep 12 22:58:07.596265 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 22:58:07.598832 systemd-logind[1971]: Removed session 17.
Sep 12 22:58:12.619845 systemd[1]: Started sshd@17-172.31.30.120:22-139.178.89.65:34450.service - OpenSSH per-connection server daemon (139.178.89.65:34450).
Sep 12 22:58:12.815394 sshd[6275]: Accepted publickey for core from 139.178.89.65 port 34450 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:12.818756 sshd-session[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:12.827027 systemd-logind[1971]: New session 18 of user core.
Sep 12 22:58:12.834565 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 22:58:13.237525 sshd[6278]: Connection closed by 139.178.89.65 port 34450
Sep 12 22:58:13.237997 sshd-session[6275]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:13.243640 systemd[1]: sshd@17-172.31.30.120:22-139.178.89.65:34450.service: Deactivated successfully.
Sep 12 22:58:13.246668 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 22:58:13.248007 systemd-logind[1971]: Session 18 logged out. Waiting for processes to exit.
Sep 12 22:58:13.250179 systemd-logind[1971]: Removed session 18.
Sep 12 22:58:13.267942 systemd[1]: Started sshd@18-172.31.30.120:22-139.178.89.65:34452.service - OpenSSH per-connection server daemon (139.178.89.65:34452).
Sep 12 22:58:13.467157 sshd[6290]: Accepted publickey for core from 139.178.89.65 port 34452 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:13.468829 sshd-session[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:13.474197 systemd-logind[1971]: New session 19 of user core.
Sep 12 22:58:13.480598 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 22:58:14.229100 sshd[6293]: Connection closed by 139.178.89.65 port 34452
Sep 12 22:58:14.230205 sshd-session[6290]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:14.234119 systemd[1]: sshd@18-172.31.30.120:22-139.178.89.65:34452.service: Deactivated successfully.
Sep 12 22:58:14.237232 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 22:58:14.239808 systemd-logind[1971]: Session 19 logged out. Waiting for processes to exit.
Sep 12 22:58:14.242899 systemd-logind[1971]: Removed session 19.
Sep 12 22:58:14.263778 systemd[1]: Started sshd@19-172.31.30.120:22-139.178.89.65:34466.service - OpenSSH per-connection server daemon (139.178.89.65:34466).
Sep 12 22:58:14.480050 sshd[6303]: Accepted publickey for core from 139.178.89.65 port 34466 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:14.481848 sshd-session[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:14.487445 systemd-logind[1971]: New session 20 of user core.
Sep 12 22:58:14.490558 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 22:58:15.068140 kubelet[3355]: I0912 22:58:15.038910 3355 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 22:58:17.753039 containerd[2001]: time="2025-09-12T22:58:17.752984022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300\" id:\"1d86f1a52266e56ea9d6a262f3b98e7815f219a7b0e783a283a84d477b29d859\" pid:6341 exited_at:{seconds:1757717897 nanos:650714582}"
Sep 12 22:58:18.128957 sshd[6306]: Connection closed by 139.178.89.65 port 34466
Sep 12 22:58:18.139620 sshd-session[6303]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:18.204611 systemd[1]: sshd@19-172.31.30.120:22-139.178.89.65:34466.service: Deactivated successfully.
Sep 12 22:58:18.214761 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 22:58:18.215224 systemd[1]: session-20.scope: Consumed 841ms CPU time, 78.7M memory peak.
Sep 12 22:58:18.223485 systemd-logind[1971]: Session 20 logged out. Waiting for processes to exit.
Sep 12 22:58:18.236827 systemd[1]: Started sshd@20-172.31.30.120:22-139.178.89.65:34474.service - OpenSSH per-connection server daemon (139.178.89.65:34474).
Sep 12 22:58:18.245727 systemd-logind[1971]: Removed session 20.
Sep 12 22:58:18.532684 containerd[2001]: time="2025-09-12T22:58:18.532390391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"9acc6a7736d63d5cee26cbfeb9683ad8663b6984f0d79d320f0256acaf8e362a\" pid:6353 exited_at:{seconds:1757717898 nanos:530674505}"
Sep 12 22:58:18.570769 sshd[6377]: Accepted publickey for core from 139.178.89.65 port 34474 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:18.574174 sshd-session[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:18.589139 systemd-logind[1971]: New session 21 of user core.
Sep 12 22:58:18.594560 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 22:58:21.200226 sshd[6380]: Connection closed by 139.178.89.65 port 34474
Sep 12 22:58:21.216436 sshd-session[6377]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:21.255876 systemd[1]: sshd@20-172.31.30.120:22-139.178.89.65:34474.service: Deactivated successfully.
Sep 12 22:58:21.262076 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 22:58:21.262623 systemd[1]: session-21.scope: Consumed 1.016s CPU time, 65.7M memory peak.
Sep 12 22:58:21.265292 systemd-logind[1971]: Session 21 logged out. Waiting for processes to exit.
Sep 12 22:58:21.271617 systemd[1]: Started sshd@21-172.31.30.120:22-139.178.89.65:43162.service - OpenSSH per-connection server daemon (139.178.89.65:43162).
Sep 12 22:58:21.275073 systemd-logind[1971]: Removed session 21.
Sep 12 22:58:21.541385 sshd[6414]: Accepted publickey for core from 139.178.89.65 port 43162 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:21.544626 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:21.551671 systemd-logind[1971]: New session 22 of user core.
Sep 12 22:58:21.558985 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 22:58:21.672871 containerd[2001]: time="2025-09-12T22:58:21.671186318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda\" id:\"a6bb936cdf8b6c96a85179d6c18d76c2f6757c67e19ea754c43425b0375dfc4a\" pid:6400 exited_at:{seconds:1757717901 nanos:653928415}"
Sep 12 22:58:22.446497 sshd[6418]: Connection closed by 139.178.89.65 port 43162
Sep 12 22:58:22.447086 sshd-session[6414]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:22.451348 systemd[1]: sshd@21-172.31.30.120:22-139.178.89.65:43162.service: Deactivated successfully.
Sep 12 22:58:22.453974 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 22:58:22.455436 systemd-logind[1971]: Session 22 logged out. Waiting for processes to exit.
Sep 12 22:58:22.457306 systemd-logind[1971]: Removed session 22.
Sep 12 22:58:27.482696 systemd[1]: Started sshd@22-172.31.30.120:22-139.178.89.65:43168.service - OpenSSH per-connection server daemon (139.178.89.65:43168).
Sep 12 22:58:27.771111 sshd[6434]: Accepted publickey for core from 139.178.89.65 port 43168 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:27.776051 sshd-session[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:27.785752 systemd-logind[1971]: New session 23 of user core.
Sep 12 22:58:27.793571 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 22:58:28.850398 sshd[6437]: Connection closed by 139.178.89.65 port 43168
Sep 12 22:58:28.851680 sshd-session[6434]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:28.862828 systemd[1]: sshd@22-172.31.30.120:22-139.178.89.65:43168.service: Deactivated successfully.
Sep 12 22:58:28.866340 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 22:58:28.868758 systemd-logind[1971]: Session 23 logged out. Waiting for processes to exit.
Sep 12 22:58:28.872405 systemd-logind[1971]: Removed session 23.
Sep 12 22:58:32.107445 containerd[2001]: time="2025-09-12T22:58:32.106613177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300\" id:\"0f40aec2da472fb1303d46da6fe55dd5621929efc002e6dfc2b9a0467544f8e6\" pid:6460 exited_at:{seconds:1757717912 nanos:98492205}"
Sep 12 22:58:33.888216 systemd[1]: Started sshd@23-172.31.30.120:22-139.178.89.65:51508.service - OpenSSH per-connection server daemon (139.178.89.65:51508).
Sep 12 22:58:34.168271 sshd[6471]: Accepted publickey for core from 139.178.89.65 port 51508 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:34.170624 sshd-session[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:34.177215 systemd-logind[1971]: New session 24 of user core.
Sep 12 22:58:34.183615 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 22:58:35.201213 sshd[6480]: Connection closed by 139.178.89.65 port 51508
Sep 12 22:58:35.202592 sshd-session[6471]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:35.209214 systemd-logind[1971]: Session 24 logged out. Waiting for processes to exit.
Sep 12 22:58:35.212210 systemd[1]: sshd@23-172.31.30.120:22-139.178.89.65:51508.service: Deactivated successfully.
Sep 12 22:58:35.216724 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 22:58:35.219810 systemd-logind[1971]: Removed session 24.
Sep 12 22:58:40.241182 systemd[1]: Started sshd@24-172.31.30.120:22-139.178.89.65:35618.service - OpenSSH per-connection server daemon (139.178.89.65:35618).
Sep 12 22:58:40.453421 sshd[6492]: Accepted publickey for core from 139.178.89.65 port 35618 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:40.456123 sshd-session[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:40.462740 systemd-logind[1971]: New session 25 of user core.
Sep 12 22:58:40.467576 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 22:58:40.888863 sshd[6495]: Connection closed by 139.178.89.65 port 35618
Sep 12 22:58:40.890805 sshd-session[6492]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:40.896916 systemd-logind[1971]: Session 25 logged out. Waiting for processes to exit.
Sep 12 22:58:40.898284 systemd[1]: sshd@24-172.31.30.120:22-139.178.89.65:35618.service: Deactivated successfully.
Sep 12 22:58:40.903294 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 22:58:40.909466 systemd-logind[1971]: Removed session 25.
Sep 12 22:58:45.925457 systemd[1]: Started sshd@25-172.31.30.120:22-139.178.89.65:35622.service - OpenSSH per-connection server daemon (139.178.89.65:35622).
Sep 12 22:58:46.228838 sshd[6509]: Accepted publickey for core from 139.178.89.65 port 35622 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:46.234392 sshd-session[6509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:46.257114 systemd-logind[1971]: New session 26 of user core.
Sep 12 22:58:46.262160 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 22:58:46.815705 containerd[2001]: time="2025-09-12T22:58:46.783633120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300\" id:\"47e4b74f71e6c384a751b474e05eb118163c606891a051ae1895b5edfc83aaa1\" pid:6528 exited_at:{seconds:1757717926 nanos:783012238}"
Sep 12 22:58:47.660490 sshd[6512]: Connection closed by 139.178.89.65 port 35622
Sep 12 22:58:47.661620 sshd-session[6509]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:47.671352 systemd-logind[1971]: Session 26 logged out. Waiting for processes to exit.
Sep 12 22:58:47.674637 systemd[1]: sshd@25-172.31.30.120:22-139.178.89.65:35622.service: Deactivated successfully.
Sep 12 22:58:47.681497 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 22:58:47.688218 systemd-logind[1971]: Removed session 26.
Sep 12 22:58:47.867185 containerd[2001]: time="2025-09-12T22:58:47.867128103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"212aaa3a289c5ff552878c0d0213bafddd42b5fed581bbdd8dbb7faa0b7ec583\" pid:6549 exited_at:{seconds:1757717927 nanos:866624131}"
Sep 12 22:58:51.101071 containerd[2001]: time="2025-09-12T22:58:51.101011011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a67f13a30bf26673e2613c8a27e5dc27ab1a31f6eb2c147a9012d310debddda\" id:\"181ade22fab420188655a61ef2147c2e32079cb445e464a1434db881275c26ce\" pid:6589 exited_at:{seconds:1757717931 nanos:100393302}"
Sep 12 22:58:52.730172 systemd[1]: Started sshd@26-172.31.30.120:22-139.178.89.65:34498.service - OpenSSH per-connection server daemon (139.178.89.65:34498).
Sep 12 22:58:52.969965 sshd[6615]: Accepted publickey for core from 139.178.89.65 port 34498 ssh2: RSA SHA256:6Cuckp9cFHLH3NTfBl1U/KSLCHTjBmHBde3uKlxnZHc
Sep 12 22:58:52.971922 sshd-session[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:52.980264 systemd-logind[1971]: New session 27 of user core.
Sep 12 22:58:52.988723 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 22:58:54.150272 sshd[6618]: Connection closed by 139.178.89.65 port 34498
Sep 12 22:58:54.150995 sshd-session[6615]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:54.157781 systemd[1]: sshd@26-172.31.30.120:22-139.178.89.65:34498.service: Deactivated successfully.
Sep 12 22:58:54.163166 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 22:58:54.166679 systemd-logind[1971]: Session 27 logged out. Waiting for processes to exit.
Sep 12 22:58:54.171928 systemd-logind[1971]: Removed session 27.
Sep 12 22:58:55.711437 containerd[2001]: time="2025-09-12T22:58:55.711253550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"76a26d3e2b137f15280abc1b7e0733c88a133bf2071e2b61ce2f060ef9db74df\" pid:6644 exited_at:{seconds:1757717935 nanos:710601648}"
Sep 12 22:59:09.061754 systemd[1]: cri-containerd-b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e.scope: Deactivated successfully.
Sep 12 22:59:09.062539 systemd[1]: cri-containerd-b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e.scope: Consumed 14.044s CPU time, 107.5M memory peak, 108.9M read from disk.
Sep 12 22:59:09.291880 containerd[2001]: time="2025-09-12T22:59:09.291685244Z" level=info msg="received exit event container_id:\"b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e\" id:\"b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e\" pid:3948 exit_status:1 exited_at:{seconds:1757717949 nanos:189243026}"
Sep 12 22:59:09.293344 containerd[2001]: time="2025-09-12T22:59:09.293248130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e\" id:\"b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e\" pid:3948 exit_status:1 exited_at:{seconds:1757717949 nanos:189243026}"
Sep 12 22:59:09.469073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e-rootfs.mount: Deactivated successfully.
Sep 12 22:59:10.108301 systemd[1]: cri-containerd-5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b.scope: Deactivated successfully.
Sep 12 22:59:10.109202 systemd[1]: cri-containerd-5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b.scope: Consumed 4.046s CPU time, 93M memory peak, 131.8M read from disk.
Sep 12 22:59:10.114069 containerd[2001]: time="2025-09-12T22:59:10.114013792Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b\" id:\"5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b\" pid:3194 exit_status:1 exited_at:{seconds:1757717950 nanos:112853751}"
Sep 12 22:59:10.114693 containerd[2001]: time="2025-09-12T22:59:10.114656045Z" level=info msg="received exit event container_id:\"5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b\" id:\"5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b\" pid:3194 exit_status:1 exited_at:{seconds:1757717950 nanos:112853751}"
Sep 12 22:59:10.155548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b-rootfs.mount: Deactivated successfully.
Sep 12 22:59:10.572540 kubelet[3355]: I0912 22:59:10.570799 3355 scope.go:117] "RemoveContainer" containerID="5a0d7058ffef0b9641e7f325e059ed4e47b45ba23aa0d4c5a623b8fda4bd956b"
Sep 12 22:59:10.579289 kubelet[3355]: I0912 22:59:10.579130 3355 scope.go:117] "RemoveContainer" containerID="b67c26d262f71fa0becc44a11e70f3f1277a9aab0e10c8a761b543b6e85ff98e"
Sep 12 22:59:10.644151 containerd[2001]: time="2025-09-12T22:59:10.644107461Z" level=info msg="CreateContainer within sandbox \"f291e8cdd89b4c8bfd4759bfafe01bf37c55f53b04b3f25d87c078686b484141\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 12 22:59:10.644986 containerd[2001]: time="2025-09-12T22:59:10.644109272Z" level=info msg="CreateContainer within sandbox \"b8e9a16f5aee9a8afab05485c924ba8d3a33b1f49c7baf173d4be13f3ef782e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 22:59:10.753464 containerd[2001]: time="2025-09-12T22:59:10.752694649Z" level=info msg="Container 871277ee4cbaabb81b2bb8070c7621908da9a337bc77c4420ec346756305e5df: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:59:10.779254 containerd[2001]: time="2025-09-12T22:59:10.775170397Z" level=info msg="Container 2f8923e73c298377777b18e25aa6492a17ce59358e139a11dada2514cc5d45d1: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:59:10.780729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651992961.mount: Deactivated successfully.
Sep 12 22:59:10.795028 containerd[2001]: time="2025-09-12T22:59:10.794961284Z" level=info msg="CreateContainer within sandbox \"f291e8cdd89b4c8bfd4759bfafe01bf37c55f53b04b3f25d87c078686b484141\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"871277ee4cbaabb81b2bb8070c7621908da9a337bc77c4420ec346756305e5df\""
Sep 12 22:59:10.803089 containerd[2001]: time="2025-09-12T22:59:10.802849090Z" level=info msg="CreateContainer within sandbox \"b8e9a16f5aee9a8afab05485c924ba8d3a33b1f49c7baf173d4be13f3ef782e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2f8923e73c298377777b18e25aa6492a17ce59358e139a11dada2514cc5d45d1\""
Sep 12 22:59:10.805283 containerd[2001]: time="2025-09-12T22:59:10.804982675Z" level=info msg="StartContainer for \"2f8923e73c298377777b18e25aa6492a17ce59358e139a11dada2514cc5d45d1\""
Sep 12 22:59:10.805923 containerd[2001]: time="2025-09-12T22:59:10.805790235Z" level=info msg="StartContainer for \"871277ee4cbaabb81b2bb8070c7621908da9a337bc77c4420ec346756305e5df\""
Sep 12 22:59:10.810864 containerd[2001]: time="2025-09-12T22:59:10.810810429Z" level=info msg="connecting to shim 2f8923e73c298377777b18e25aa6492a17ce59358e139a11dada2514cc5d45d1" address="unix:///run/containerd/s/ee11275b3823b08237a4626aeaa50f2b3111084d39ac2e668248813d03bb9f33" protocol=ttrpc version=3
Sep 12 22:59:10.812767 containerd[2001]: time="2025-09-12T22:59:10.812725635Z" level=info msg="connecting to shim 871277ee4cbaabb81b2bb8070c7621908da9a337bc77c4420ec346756305e5df" address="unix:///run/containerd/s/32fd309852600d6f87715f32264bb720dfdc537d805c9a6e3044d5320d4c156a" protocol=ttrpc version=3
Sep 12 22:59:10.904590 systemd[1]: Started cri-containerd-2f8923e73c298377777b18e25aa6492a17ce59358e139a11dada2514cc5d45d1.scope - libcontainer container 2f8923e73c298377777b18e25aa6492a17ce59358e139a11dada2514cc5d45d1.
Sep 12 22:59:10.905708 systemd[1]: Started cri-containerd-871277ee4cbaabb81b2bb8070c7621908da9a337bc77c4420ec346756305e5df.scope - libcontainer container 871277ee4cbaabb81b2bb8070c7621908da9a337bc77c4420ec346756305e5df.
Sep 12 22:59:11.006952 containerd[2001]: time="2025-09-12T22:59:11.006901000Z" level=info msg="StartContainer for \"871277ee4cbaabb81b2bb8070c7621908da9a337bc77c4420ec346756305e5df\" returns successfully"
Sep 12 22:59:11.014122 containerd[2001]: time="2025-09-12T22:59:11.014064077Z" level=info msg="StartContainer for \"2f8923e73c298377777b18e25aa6492a17ce59358e139a11dada2514cc5d45d1\" returns successfully"
Sep 12 22:59:15.525670 systemd[1]: cri-containerd-196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6.scope: Deactivated successfully.
Sep 12 22:59:15.525929 systemd[1]: cri-containerd-196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6.scope: Consumed 2.205s CPU time, 39.1M memory peak, 90.7M read from disk.
Sep 12 22:59:15.529248 containerd[2001]: time="2025-09-12T22:59:15.529158457Z" level=info msg="received exit event container_id:\"196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6\" id:\"196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6\" pid:3195 exit_status:1 exited_at:{seconds:1757717955 nanos:528772681}"
Sep 12 22:59:15.529774 containerd[2001]: time="2025-09-12T22:59:15.529334819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6\" id:\"196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6\" pid:3195 exit_status:1 exited_at:{seconds:1757717955 nanos:528772681}"
Sep 12 22:59:15.590390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6-rootfs.mount: Deactivated successfully.
Sep 12 22:59:15.798279 kubelet[3355]: E0912 22:59:15.796442 3355 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 12 22:59:16.537465 kubelet[3355]: I0912 22:59:16.537435 3355 scope.go:117] "RemoveContainer" containerID="196d2a6772fc0edf237c02594bca49b97aca8985ad4abfdef9ebc6c2c70092e6"
Sep 12 22:59:16.539357 containerd[2001]: time="2025-09-12T22:59:16.539296468Z" level=info msg="CreateContainer within sandbox \"15ea8bd19c6e693dee068e7ab08975576eb2ee2ef3cf6eed00ff451b827b340a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 22:59:16.578589 containerd[2001]: time="2025-09-12T22:59:16.575663465Z" level=info msg="Container 4f9d446e3f96921018ab8c2199df7a06b15c5e622ff9515cfab11da58607fbbd: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:59:16.593070 containerd[2001]: time="2025-09-12T22:59:16.593019064Z" level=info msg="CreateContainer within sandbox \"15ea8bd19c6e693dee068e7ab08975576eb2ee2ef3cf6eed00ff451b827b340a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4f9d446e3f96921018ab8c2199df7a06b15c5e622ff9515cfab11da58607fbbd\""
Sep 12 22:59:16.594289 containerd[2001]: time="2025-09-12T22:59:16.593852874Z" level=info msg="StartContainer for \"4f9d446e3f96921018ab8c2199df7a06b15c5e622ff9515cfab11da58607fbbd\""
Sep 12 22:59:16.595633 containerd[2001]: time="2025-09-12T22:59:16.595601243Z" level=info msg="connecting to shim 4f9d446e3f96921018ab8c2199df7a06b15c5e622ff9515cfab11da58607fbbd" address="unix:///run/containerd/s/ccb67c0aab55444b5ffe5f9c33dc4df4c823ee07655d49b6c1b849b94d167206" protocol=ttrpc version=3
Sep 12 22:59:16.633771 systemd[1]: Started cri-containerd-4f9d446e3f96921018ab8c2199df7a06b15c5e622ff9515cfab11da58607fbbd.scope - libcontainer container 4f9d446e3f96921018ab8c2199df7a06b15c5e622ff9515cfab11da58607fbbd.
Sep 12 22:59:16.650793 containerd[2001]: time="2025-09-12T22:59:16.650382782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ffbd7af468f05bb19c91fc4fc24e0a91f1e4c2e427c3cdc0fb3cc40e66bb300\" id:\"e59f337c5ac66447c3126e5ee45dd55cea02c75fbc369801083bb3bb1a730f71\" pid:6788 exit_status:1 exited_at:{seconds:1757717956 nanos:648926247}"
Sep 12 22:59:16.698360 containerd[2001]: time="2025-09-12T22:59:16.698314734Z" level=info msg="StartContainer for \"4f9d446e3f96921018ab8c2199df7a06b15c5e622ff9515cfab11da58607fbbd\" returns successfully"
Sep 12 22:59:16.867401 containerd[2001]: time="2025-09-12T22:59:16.867221792Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc2dd61d03edfb4a73348b0d9439d493a3ef50787f107e5bb387286b075cefa7\" id:\"213860bc3ddb169dc893f33924244e2e659e77961a1ab92bf52a330555644be1\" pid:6838 exited_at:{seconds:1757717956 nanos:866967613}"