Sep 16 05:03:07.902018 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025
Sep 16 05:03:07.902058 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 05:03:07.902074 kernel: BIOS-provided physical RAM map:
Sep 16 05:03:07.902087 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 16 05:03:07.902100 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 16 05:03:07.902112 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 16 05:03:07.902126 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 16 05:03:07.902138 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 16 05:03:07.902154 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 16 05:03:07.902168 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 16 05:03:07.902181 kernel: NX (Execute Disable) protection: active
Sep 16 05:03:07.902193 kernel: APIC: Static calls initialized
Sep 16 05:03:07.902204 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 16 05:03:07.902218 kernel: extended physical RAM map:
Sep 16 05:03:07.902241 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 16 05:03:07.902257 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Sep 16 05:03:07.902273 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Sep 16 05:03:07.902287 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Sep 16 05:03:07.902299 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 16 05:03:07.902313 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 16 05:03:07.902325 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 16 05:03:07.902338 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 16 05:03:07.902350 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 16 05:03:07.902363 kernel: efi: EFI v2.7 by EDK II
Sep 16 05:03:07.902380 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 16 05:03:07.902393 kernel: secureboot: Secure boot disabled
Sep 16 05:03:07.902406 kernel: SMBIOS 2.7 present.
Sep 16 05:03:07.902419 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 16 05:03:07.902432 kernel: DMI: Memory slots populated: 1/1
Sep 16 05:03:07.902445 kernel: Hypervisor detected: KVM
Sep 16 05:03:07.902459 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 16 05:03:07.902472 kernel: kvm-clock: using sched offset of 5506532788 cycles
Sep 16 05:03:07.902487 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 16 05:03:07.902501 kernel: tsc: Detected 2499.998 MHz processor
Sep 16 05:03:07.902515 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 16 05:03:07.902532 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 16 05:03:07.902546 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 16 05:03:07.902560 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 16 05:03:07.902574 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 16 05:03:07.902588 kernel: Using GB pages for direct mapping
Sep 16 05:03:07.902608 kernel: ACPI: Early table checksum verification disabled
Sep 16 05:03:07.902625 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 16 05:03:07.902640 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 16 05:03:07.902654 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 16 05:03:07.902752 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 16 05:03:07.902765 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 16 05:03:07.902777 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 16 05:03:07.902791 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 16 05:03:07.902806 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 16 05:03:07.902825 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 16 05:03:07.902841 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 16 05:03:07.902854 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 16 05:03:07.902870 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 16 05:03:07.902885 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 16 05:03:07.902898 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 16 05:03:07.902911 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 16 05:03:07.902925 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 16 05:03:07.902942 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 16 05:03:07.902956 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 16 05:03:07.902973 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 16 05:03:07.902991 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 16 05:03:07.903003 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 16 05:03:07.903016 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 16 05:03:07.903028 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 16 05:03:07.903041 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 16 05:03:07.903054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 16 05:03:07.903068 kernel: NUMA: Initialized distance table, cnt=1
Sep 16 05:03:07.903086 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Sep 16 05:03:07.903100 kernel: Zone ranges:
Sep 16 05:03:07.903115 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 16 05:03:07.903129 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 16 05:03:07.903144 kernel: Normal empty
Sep 16 05:03:07.903158 kernel: Device empty
Sep 16 05:03:07.903172 kernel: Movable zone start for each node
Sep 16 05:03:07.903188 kernel: Early memory node ranges
Sep 16 05:03:07.903203 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 16 05:03:07.903220 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 16 05:03:07.903234 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 16 05:03:07.903248 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 16 05:03:07.903261 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 05:03:07.903274 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 16 05:03:07.903287 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 16 05:03:07.903302 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 16 05:03:07.903317 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 16 05:03:07.903332 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 16 05:03:07.903350 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 16 05:03:07.903365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 16 05:03:07.903380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 16 05:03:07.903395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 16 05:03:07.903407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 16 05:03:07.903420 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 16 05:03:07.903435 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 16 05:03:07.903449 kernel: TSC deadline timer available
Sep 16 05:03:07.903463 kernel: CPU topo: Max. logical packages: 1
Sep 16 05:03:07.903475 kernel: CPU topo: Max. logical dies: 1
Sep 16 05:03:07.903494 kernel: CPU topo: Max. dies per package: 1
Sep 16 05:03:07.903508 kernel: CPU topo: Max. threads per core: 2
Sep 16 05:03:07.903522 kernel: CPU topo: Num. cores per package: 1
Sep 16 05:03:07.903535 kernel: CPU topo: Num. threads per package: 2
Sep 16 05:03:07.903549 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 16 05:03:07.903564 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 16 05:03:07.903578 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 16 05:03:07.903592 kernel: Booting paravirtualized kernel on KVM
Sep 16 05:03:07.903606 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 16 05:03:07.903623 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 16 05:03:07.903651 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 16 05:03:07.904407 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 16 05:03:07.904427 kernel: pcpu-alloc: [0] 0 1
Sep 16 05:03:07.904442 kernel: kvm-guest: PV spinlocks enabled
Sep 16 05:03:07.904458 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 16 05:03:07.904476 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 05:03:07.904492 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 05:03:07.904512 kernel: random: crng init done
Sep 16 05:03:07.904527 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 16 05:03:07.904542 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 16 05:03:07.904557 kernel: Fallback order for Node 0: 0
Sep 16 05:03:07.904572 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Sep 16 05:03:07.904588 kernel: Policy zone: DMA32
Sep 16 05:03:07.904614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 05:03:07.904633 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 16 05:03:07.904648 kernel: Kernel/User page tables isolation: enabled
Sep 16 05:03:07.905320 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 16 05:03:07.905341 kernel: ftrace: allocated 157 pages with 5 groups
Sep 16 05:03:07.905362 kernel: Dynamic Preempt: voluntary
Sep 16 05:03:07.905379 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 05:03:07.905396 kernel: rcu: RCU event tracing is enabled.
Sep 16 05:03:07.905413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 16 05:03:07.905429 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 05:03:07.905445 kernel: Rude variant of Tasks RCU enabled.
Sep 16 05:03:07.905465 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 05:03:07.905481 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 05:03:07.905497 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 16 05:03:07.905513 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 05:03:07.905529 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 05:03:07.905545 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 05:03:07.905561 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 16 05:03:07.905577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 05:03:07.905596 kernel: Console: colour dummy device 80x25
Sep 16 05:03:07.905612 kernel: printk: legacy console [tty0] enabled
Sep 16 05:03:07.905628 kernel: printk: legacy console [ttyS0] enabled
Sep 16 05:03:07.905643 kernel: ACPI: Core revision 20240827
Sep 16 05:03:07.905683 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 16 05:03:07.905699 kernel: APIC: Switch to symmetric I/O mode setup
Sep 16 05:03:07.905714 kernel: x2apic enabled
Sep 16 05:03:07.905729 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 16 05:03:07.905746 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 16 05:03:07.905762 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 16 05:03:07.905782 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 16 05:03:07.905798 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 16 05:03:07.905814 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 16 05:03:07.905830 kernel: Spectre V2 : Mitigation: Retpolines
Sep 16 05:03:07.905846 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 16 05:03:07.905862 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 16 05:03:07.905880 kernel: RETBleed: Vulnerable
Sep 16 05:03:07.905895 kernel: Speculative Store Bypass: Vulnerable
Sep 16 05:03:07.905912 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 16 05:03:07.905928 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 16 05:03:07.905948 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 16 05:03:07.905964 kernel: active return thunk: its_return_thunk
Sep 16 05:03:07.905980 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 16 05:03:07.905997 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 16 05:03:07.906014 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 16 05:03:07.906030 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 16 05:03:07.906046 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 16 05:03:07.906062 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 16 05:03:07.906078 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 16 05:03:07.906095 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 16 05:03:07.906111 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 16 05:03:07.906131 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 16 05:03:07.906148 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 16 05:03:07.906164 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 16 05:03:07.906179 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 16 05:03:07.906196 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 16 05:03:07.906212 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 16 05:03:07.906229 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 16 05:03:07.906245 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 16 05:03:07.906263 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 16 05:03:07.906280 kernel: Freeing SMP alternatives memory: 32K
Sep 16 05:03:07.906296 kernel: pid_max: default: 32768 minimum: 301
Sep 16 05:03:07.906316 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 05:03:07.906333 kernel: landlock: Up and running.
Sep 16 05:03:07.906350 kernel: SELinux: Initializing.
Sep 16 05:03:07.906366 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 16 05:03:07.906382 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 16 05:03:07.906399 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 16 05:03:07.906416 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 16 05:03:07.906433 kernel: signal: max sigframe size: 3632
Sep 16 05:03:07.906450 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 05:03:07.906467 kernel: rcu: Max phase no-delay instances is 400.
Sep 16 05:03:07.906485 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 05:03:07.906505 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 16 05:03:07.906522 kernel: smp: Bringing up secondary CPUs ...
Sep 16 05:03:07.906538 kernel: smpboot: x86: Booting SMP configuration:
Sep 16 05:03:07.906555 kernel: .... node #0, CPUs: #1
Sep 16 05:03:07.906573 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 16 05:03:07.906589 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 16 05:03:07.906606 kernel: smp: Brought up 1 node, 2 CPUs
Sep 16 05:03:07.906622 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 16 05:03:07.906641 kernel: Memory: 1908060K/2037804K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 125188K reserved, 0K cma-reserved)
Sep 16 05:03:07.908458 kernel: devtmpfs: initialized
Sep 16 05:03:07.908490 kernel: x86/mm: Memory block size: 128MB
Sep 16 05:03:07.908506 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 16 05:03:07.908521 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 05:03:07.908537 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 16 05:03:07.908551 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 05:03:07.908565 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 05:03:07.908580 kernel: audit: initializing netlink subsys (disabled)
Sep 16 05:03:07.908601 kernel: audit: type=2000 audit(1757998986.626:1): state=initialized audit_enabled=0 res=1
Sep 16 05:03:07.908615 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 05:03:07.908630 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 16 05:03:07.908645 kernel: cpuidle: using governor menu
Sep 16 05:03:07.908748 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 05:03:07.908764 kernel: dca service started, version 1.12.1
Sep 16 05:03:07.908779 kernel: PCI: Using configuration type 1 for base access
Sep 16 05:03:07.908795 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 16 05:03:07.908812 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 05:03:07.908834 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 05:03:07.908850 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 05:03:07.908867 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 05:03:07.908884 kernel: ACPI: Added _OSI(Module Device)
Sep 16 05:03:07.908901 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 05:03:07.908918 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 05:03:07.908934 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 16 05:03:07.908951 kernel: ACPI: Interpreter enabled
Sep 16 05:03:07.908968 kernel: ACPI: PM: (supports S0 S5)
Sep 16 05:03:07.908988 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 16 05:03:07.909005 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 16 05:03:07.909023 kernel: PCI: Using E820 reservations for host bridge windows
Sep 16 05:03:07.909040 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 16 05:03:07.909056 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 16 05:03:07.909313 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 16 05:03:07.909472 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 16 05:03:07.909608 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 16 05:03:07.909627 kernel: acpiphp: Slot [3] registered
Sep 16 05:03:07.909642 kernel: acpiphp: Slot [4] registered
Sep 16 05:03:07.909672 kernel: acpiphp: Slot [5] registered
Sep 16 05:03:07.909687 kernel: acpiphp: Slot [6] registered
Sep 16 05:03:07.909700 kernel: acpiphp: Slot [7] registered
Sep 16 05:03:07.909714 kernel: acpiphp: Slot [8] registered
Sep 16 05:03:07.909727 kernel: acpiphp: Slot [9] registered
Sep 16 05:03:07.909741 kernel: acpiphp: Slot [10] registered
Sep 16 05:03:07.909761 kernel: acpiphp: Slot [11] registered
Sep 16 05:03:07.909777 kernel: acpiphp: Slot [12] registered
Sep 16 05:03:07.909793 kernel: acpiphp: Slot [13] registered
Sep 16 05:03:07.909809 kernel: acpiphp: Slot [14] registered
Sep 16 05:03:07.909823 kernel: acpiphp: Slot [15] registered
Sep 16 05:03:07.909838 kernel: acpiphp: Slot [16] registered
Sep 16 05:03:07.909855 kernel: acpiphp: Slot [17] registered
Sep 16 05:03:07.909871 kernel: acpiphp: Slot [18] registered
Sep 16 05:03:07.909887 kernel: acpiphp: Slot [19] registered
Sep 16 05:03:07.909903 kernel: acpiphp: Slot [20] registered
Sep 16 05:03:07.909922 kernel: acpiphp: Slot [21] registered
Sep 16 05:03:07.909937 kernel: acpiphp: Slot [22] registered
Sep 16 05:03:07.909953 kernel: acpiphp: Slot [23] registered
Sep 16 05:03:07.909969 kernel: acpiphp: Slot [24] registered
Sep 16 05:03:07.909985 kernel: acpiphp: Slot [25] registered
Sep 16 05:03:07.910002 kernel: acpiphp: Slot [26] registered
Sep 16 05:03:07.910018 kernel: acpiphp: Slot [27] registered
Sep 16 05:03:07.910034 kernel: acpiphp: Slot [28] registered
Sep 16 05:03:07.910050 kernel: acpiphp: Slot [29] registered
Sep 16 05:03:07.910069 kernel: acpiphp: Slot [30] registered
Sep 16 05:03:07.910085 kernel: acpiphp: Slot [31] registered
Sep 16 05:03:07.910102 kernel: PCI host bridge to bus 0000:00
Sep 16 05:03:07.910261 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 16 05:03:07.910389 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 16 05:03:07.910513 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 16 05:03:07.910634 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 16 05:03:07.912487 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 16 05:03:07.912633 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 16 05:03:07.912811 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 16 05:03:07.912971 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 16 05:03:07.913121 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Sep 16 05:03:07.913259 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 16 05:03:07.913394 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 16 05:03:07.913532 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 16 05:03:07.913682 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 16 05:03:07.913817 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 16 05:03:07.913949 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 16 05:03:07.914084 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 16 05:03:07.914232 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 16 05:03:07.914368 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Sep 16 05:03:07.914510 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 16 05:03:07.914636 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 16 05:03:07.914802 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Sep 16 05:03:07.914944 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Sep 16 05:03:07.915100 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Sep 16 05:03:07.915230 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Sep 16 05:03:07.915254 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 16 05:03:07.915270 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 16 05:03:07.915286 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 16 05:03:07.915301 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 16 05:03:07.915314 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 16 05:03:07.915330 kernel: iommu: Default domain type: Translated
Sep 16 05:03:07.915345 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 16 05:03:07.915360 kernel: efivars: Registered efivars operations
Sep 16 05:03:07.915376 kernel: PCI: Using ACPI for IRQ routing
Sep 16 05:03:07.915395 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 16 05:03:07.915410 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Sep 16 05:03:07.915425 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 16 05:03:07.915440 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 16 05:03:07.915591 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 16 05:03:07.920415 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 16 05:03:07.920603 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 16 05:03:07.920627 kernel: vgaarb: loaded
Sep 16 05:03:07.920652 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 16 05:03:07.920696 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 16 05:03:07.920714 kernel: clocksource: Switched to clocksource kvm-clock
Sep 16 05:03:07.920731 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 05:03:07.920748 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 05:03:07.920766 kernel: pnp: PnP ACPI init
Sep 16 05:03:07.920783 kernel: pnp: PnP ACPI: found 5 devices
Sep 16 05:03:07.920800 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 16 05:03:07.920817 kernel: NET: Registered PF_INET protocol family
Sep 16 05:03:07.920838 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 16 05:03:07.920852 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 16 05:03:07.920865 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 05:03:07.920880 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 16 05:03:07.920896 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 16 05:03:07.920912 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 16 05:03:07.920930 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 16 05:03:07.920947 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 16 05:03:07.920965 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 05:03:07.920985 kernel: NET: Registered PF_XDP protocol family
Sep 16 05:03:07.921134 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 16 05:03:07.921258 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 16 05:03:07.921380 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 16 05:03:07.921503 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 16 05:03:07.921625 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 16 05:03:07.921799 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 16 05:03:07.921821 kernel: PCI: CLS 0 bytes, default 64
Sep 16 05:03:07.921845 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 16 05:03:07.921861 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 16 05:03:07.921878 kernel: clocksource: Switched to clocksource tsc
Sep 16 05:03:07.921894 kernel: Initialise system trusted keyrings
Sep 16 05:03:07.921909 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 16 05:03:07.921922 kernel: Key type asymmetric registered
Sep 16 05:03:07.921937 kernel: Asymmetric key parser 'x509' registered
Sep 16 05:03:07.921953 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 16 05:03:07.921968 kernel: io scheduler mq-deadline registered
Sep 16 05:03:07.921987 kernel: io scheduler kyber registered
Sep 16 05:03:07.922003 kernel: io scheduler bfq registered
Sep 16 05:03:07.922019 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 16 05:03:07.922035 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 05:03:07.922051 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 05:03:07.922066 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 16 05:03:07.922082 kernel: i8042: Warning: Keylock active
Sep 16 05:03:07.922097 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 16 05:03:07.922113 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 16 05:03:07.922311 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 16 05:03:07.922448 kernel: rtc_cmos 00:00: registered as rtc0
Sep 16 05:03:07.922581 kernel: rtc_cmos 00:00: setting system clock to 2025-09-16T05:03:07 UTC (1757998987)
Sep 16 05:03:07.922738 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 16 05:03:07.922784 kernel: intel_pstate: CPU model not supported
Sep 16 05:03:07.922803 kernel: efifb: probing for efifb
Sep 16 05:03:07.922821 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 16 05:03:07.922838 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 16 05:03:07.922857 kernel: efifb: scrolling: redraw
Sep 16 05:03:07.922872 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 16 05:03:07.922886 kernel: Console: switching to colour frame buffer device 100x37
Sep 16 05:03:07.922904 kernel: fb0: EFI VGA frame buffer device
Sep 16 05:03:07.922920 kernel: pstore: Using crash dump compression: deflate
Sep 16 05:03:07.922937 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 16 05:03:07.922954 kernel: NET: Registered PF_INET6 protocol family
Sep 16 05:03:07.922970 kernel: Segment Routing with IPv6
Sep 16 05:03:07.922985 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 05:03:07.923005 kernel: NET: Registered PF_PACKET protocol family
Sep 16 05:03:07.923022 kernel: Key type dns_resolver registered
Sep 16 05:03:07.923035 kernel: IPI shorthand broadcast: enabled
Sep 16 05:03:07.923051 kernel: sched_clock: Marking stable (2612003681, 181708128)->(2897119131, -103407322)
Sep 16 05:03:07.923067 kernel: registered taskstats version 1
Sep 16 05:03:07.923083 kernel: Loading compiled-in X.509 certificates
Sep 16 05:03:07.923100 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf'
Sep 16 05:03:07.923116 kernel: Demotion targets for Node 0: null
Sep 16 05:03:07.923132 kernel: Key type .fscrypt registered
Sep 16 05:03:07.923150 kernel: Key type fscrypt-provisioning registered
Sep 16 05:03:07.923166 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 16 05:03:07.923182 kernel: ima: Allocated hash algorithm: sha1
Sep 16 05:03:07.923198 kernel: ima: No architecture policies found
Sep 16 05:03:07.923214 kernel: clk: Disabling unused clocks
Sep 16 05:03:07.923231 kernel: Warning: unable to open an initial console.
Sep 16 05:03:07.923249 kernel: Freeing unused kernel image (initmem) memory: 54096K
Sep 16 05:03:07.923267 kernel: Write protecting the kernel read-only data: 24576k
Sep 16 05:03:07.923285 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 16 05:03:07.923306 kernel: Run /init as init process
Sep 16 05:03:07.923323 kernel: with arguments:
Sep 16 05:03:07.923339 kernel: /init
Sep 16 05:03:07.923354 kernel: with environment:
Sep 16 05:03:07.923370 kernel: HOME=/
Sep 16 05:03:07.923390 kernel: TERM=linux
Sep 16 05:03:07.923407 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 16 05:03:07.923426 systemd[1]: Successfully made /usr/ read-only.
Sep 16 05:03:07.923451 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 05:03:07.923469 systemd[1]: Detected virtualization amazon. Sep 16 05:03:07.923485 systemd[1]: Detected architecture x86-64. Sep 16 05:03:07.923502 systemd[1]: Running in initrd. Sep 16 05:03:07.923522 systemd[1]: No hostname configured, using default hostname. Sep 16 05:03:07.923540 systemd[1]: Hostname set to . Sep 16 05:03:07.923555 systemd[1]: Initializing machine ID from VM UUID. Sep 16 05:03:07.923570 systemd[1]: Queued start job for default target initrd.target. Sep 16 05:03:07.923586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 05:03:07.923603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 05:03:07.923625 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 05:03:07.923654 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 05:03:07.925731 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 05:03:07.925751 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 05:03:07.925769 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 05:03:07.925786 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 05:03:07.925804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 16 05:03:07.925821 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 05:03:07.925838 systemd[1]: Reached target paths.target - Path Units. Sep 16 05:03:07.925857 systemd[1]: Reached target slices.target - Slice Units. Sep 16 05:03:07.925875 systemd[1]: Reached target swap.target - Swaps. Sep 16 05:03:07.925892 systemd[1]: Reached target timers.target - Timer Units. Sep 16 05:03:07.925909 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 05:03:07.925926 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 05:03:07.925943 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 05:03:07.925960 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 05:03:07.925978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 05:03:07.925995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 05:03:07.926015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 05:03:07.926032 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 05:03:07.926049 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 05:03:07.926066 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 05:03:07.926083 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 05:03:07.926100 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 05:03:07.926118 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 05:03:07.926136 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 05:03:07.926157 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 16 05:03:07.926175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:07.926228 systemd-journald[207]: Collecting audit messages is disabled. Sep 16 05:03:07.926266 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 05:03:07.926287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 05:03:07.926305 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 05:03:07.926322 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 05:03:07.926339 systemd-journald[207]: Journal started Sep 16 05:03:07.926376 systemd-journald[207]: Runtime Journal (/run/log/journal/ec25dfb74482cbd6cf79a123a33e8bcf) is 4.8M, max 38.4M, 33.6M free. Sep 16 05:03:07.931685 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 05:03:07.935507 systemd-modules-load[209]: Inserted module 'overlay' Sep 16 05:03:07.941183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 05:03:07.954821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:07.955936 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 05:03:07.961810 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 05:03:07.968848 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 05:03:07.980077 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 05:03:07.987685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 05:03:07.989132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 16 05:03:07.993779 kernel: Bridge firewalling registered Sep 16 05:03:07.990338 systemd-modules-load[209]: Inserted module 'br_netfilter' Sep 16 05:03:07.995227 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 05:03:07.998824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:03:08.000678 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 16 05:03:08.008457 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 05:03:08.016084 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 05:03:08.019908 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 05:03:08.021647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:03:08.025741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 05:03:08.046265 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 05:03:08.063119 systemd-resolved[246]: Positive Trust Anchors: Sep 16 05:03:08.063863 systemd-resolved[246]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 05:03:08.063903 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 05:03:08.069371 systemd-resolved[246]: Defaulting to hostname 'linux'. Sep 16 05:03:08.070382 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 05:03:08.070835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 05:03:08.135736 kernel: SCSI subsystem initialized Sep 16 05:03:08.145689 kernel: Loading iSCSI transport class v2.0-870. Sep 16 05:03:08.157705 kernel: iscsi: registered transport (tcp) Sep 16 05:03:08.178947 kernel: iscsi: registered transport (qla4xxx) Sep 16 05:03:08.179033 kernel: QLogic iSCSI HBA Driver Sep 16 05:03:08.198517 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 05:03:08.215516 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 05:03:08.216752 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 05:03:08.264118 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 05:03:08.266283 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 16 05:03:08.318700 kernel: raid6: avx512x4 gen() 18101 MB/s Sep 16 05:03:08.336690 kernel: raid6: avx512x2 gen() 18068 MB/s Sep 16 05:03:08.354699 kernel: raid6: avx512x1 gen() 17944 MB/s Sep 16 05:03:08.372690 kernel: raid6: avx2x4 gen() 17904 MB/s Sep 16 05:03:08.390691 kernel: raid6: avx2x2 gen() 17884 MB/s Sep 16 05:03:08.408952 kernel: raid6: avx2x1 gen() 13807 MB/s Sep 16 05:03:08.409008 kernel: raid6: using algorithm avx512x4 gen() 18101 MB/s Sep 16 05:03:08.427875 kernel: raid6: .... xor() 7776 MB/s, rmw enabled Sep 16 05:03:08.427935 kernel: raid6: using avx512x2 recovery algorithm Sep 16 05:03:08.448697 kernel: xor: automatically using best checksumming function avx Sep 16 05:03:08.617692 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 05:03:08.624704 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 05:03:08.627019 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 05:03:08.651610 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 16 05:03:08.658358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 05:03:08.662363 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 05:03:08.689600 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Sep 16 05:03:08.715482 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 05:03:08.717694 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 05:03:08.778031 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 05:03:08.782373 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 16 05:03:08.861847 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 16 05:03:08.862118 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 16 05:03:08.871986 kernel: cryptd: max_cpu_qlen set to 1000 Sep 16 05:03:08.872051 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 16 05:03:08.892752 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:e9:f0:c2:81:bb Sep 16 05:03:08.901696 kernel: AES CTR mode by8 optimization enabled Sep 16 05:03:08.940538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 05:03:08.943919 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 16 05:03:08.944172 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 16 05:03:08.944271 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:08.945902 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:08.950685 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 16 05:03:08.953552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:08.957609 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 16 05:03:08.959028 (udev-worker)[499]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:03:08.961961 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 05:03:08.978401 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 05:03:08.978441 kernel: GPT:9289727 != 16777215 Sep 16 05:03:08.978464 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 05:03:08.978493 kernel: GPT:9289727 != 16777215 Sep 16 05:03:08.978513 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 16 05:03:08.978533 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 16 05:03:08.991915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:09.000709 kernel: nvme nvme0: using unchecked data buffer Sep 16 05:03:09.088543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 16 05:03:09.098110 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 05:03:09.108758 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 16 05:03:09.109251 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 16 05:03:09.128262 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 16 05:03:09.139144 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 16 05:03:09.147551 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 05:03:09.148248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 05:03:09.149475 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 05:03:09.151121 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 05:03:09.153840 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 05:03:09.180398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 05:03:09.181795 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 16 05:03:09.182322 disk-uuid[691]: Primary Header is updated. Sep 16 05:03:09.182322 disk-uuid[691]: Secondary Entries is updated. Sep 16 05:03:09.182322 disk-uuid[691]: Secondary Header is updated. Sep 16 05:03:10.197684 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 16 05:03:10.197927 disk-uuid[699]: The operation has completed successfully. 
Sep 16 05:03:10.307250 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 05:03:10.307363 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 05:03:10.345852 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 05:03:10.358992 sh[959]: Success Sep 16 05:03:10.383100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 05:03:10.383181 kernel: device-mapper: uevent: version 1.0.3 Sep 16 05:03:10.383203 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 05:03:10.395816 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 16 05:03:10.479888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 05:03:10.483865 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 05:03:10.494219 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 05:03:10.521688 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (982) Sep 16 05:03:10.525646 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e Sep 16 05:03:10.525762 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:03:10.555028 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 16 05:03:10.555097 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 05:03:10.555111 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 05:03:10.559260 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 05:03:10.560601 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 16 05:03:10.561127 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 05:03:10.561873 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 05:03:10.564473 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 05:03:10.603774 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1015) Sep 16 05:03:10.608712 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:10.608780 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:03:10.618144 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 16 05:03:10.618225 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 16 05:03:10.626765 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:10.627589 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 05:03:10.630834 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 05:03:10.693088 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 05:03:10.696254 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 05:03:10.734782 systemd-networkd[1151]: lo: Link UP Sep 16 05:03:10.734794 systemd-networkd[1151]: lo: Gained carrier Sep 16 05:03:10.741308 systemd-networkd[1151]: Enumeration completed Sep 16 05:03:10.741795 systemd-networkd[1151]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:03:10.741801 systemd-networkd[1151]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 16 05:03:10.742847 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 05:03:10.745521 systemd[1]: Reached target network.target - Network. Sep 16 05:03:10.749131 systemd-networkd[1151]: eth0: Link UP Sep 16 05:03:10.749137 systemd-networkd[1151]: eth0: Gained carrier Sep 16 05:03:10.749155 systemd-networkd[1151]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:03:10.760780 systemd-networkd[1151]: eth0: DHCPv4 address 172.31.26.160/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 16 05:03:10.886891 ignition[1092]: Ignition 2.22.0 Sep 16 05:03:10.886902 ignition[1092]: Stage: fetch-offline Sep 16 05:03:10.887068 ignition[1092]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:10.887076 ignition[1092]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 16 05:03:10.888011 ignition[1092]: Ignition finished successfully Sep 16 05:03:10.890020 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 05:03:10.891328 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 16 05:03:10.931405 ignition[1160]: Ignition 2.22.0 Sep 16 05:03:10.931416 ignition[1160]: Stage: fetch Sep 16 05:03:10.932881 ignition[1160]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:10.932894 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 16 05:03:10.932982 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 16 05:03:10.956824 ignition[1160]: PUT result: OK Sep 16 05:03:10.960774 ignition[1160]: parsed url from cmdline: "" Sep 16 05:03:10.960784 ignition[1160]: no config URL provided Sep 16 05:03:10.960793 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 05:03:10.960804 ignition[1160]: no config at "/usr/lib/ignition/user.ign" Sep 16 05:03:10.960891 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 16 05:03:10.961703 ignition[1160]: PUT result: OK Sep 16 05:03:10.961747 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 16 05:03:10.962777 ignition[1160]: GET result: OK Sep 16 05:03:10.962847 ignition[1160]: parsing config with SHA512: 3735cac2ecfab21efd1f83ca8294137f44b8966d318128d550738179a83ae02a06cbba0ffe4c7922086ffd85e55b956b35469bd91bd65ba5c417ede724a4707b Sep 16 05:03:10.966426 unknown[1160]: fetched base config from "system" Sep 16 05:03:10.966435 unknown[1160]: fetched base config from "system" Sep 16 05:03:10.966439 unknown[1160]: fetched user config from "aws" Sep 16 05:03:10.967056 ignition[1160]: fetch: fetch complete Sep 16 05:03:10.967061 ignition[1160]: fetch: fetch passed Sep 16 05:03:10.967105 ignition[1160]: Ignition finished successfully Sep 16 05:03:10.969388 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 16 05:03:10.970609 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 16 05:03:11.002442 ignition[1167]: Ignition 2.22.0 Sep 16 05:03:11.002459 ignition[1167]: Stage: kargs Sep 16 05:03:11.002760 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:11.002769 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 16 05:03:11.002842 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 16 05:03:11.004054 ignition[1167]: PUT result: OK Sep 16 05:03:11.006619 ignition[1167]: kargs: kargs passed Sep 16 05:03:11.006688 ignition[1167]: Ignition finished successfully Sep 16 05:03:11.008582 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 05:03:11.010116 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 16 05:03:11.040752 ignition[1174]: Ignition 2.22.0 Sep 16 05:03:11.040764 ignition[1174]: Stage: disks Sep 16 05:03:11.041041 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:11.041050 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 16 05:03:11.041119 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 16 05:03:11.041969 ignition[1174]: PUT result: OK Sep 16 05:03:11.044803 ignition[1174]: disks: disks passed Sep 16 05:03:11.044873 ignition[1174]: Ignition finished successfully Sep 16 05:03:11.046927 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 05:03:11.047463 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 05:03:11.047901 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 05:03:11.048339 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 05:03:11.048962 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 05:03:11.049478 systemd[1]: Reached target basic.target - Basic System. Sep 16 05:03:11.051142 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 16 05:03:11.087100 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 16 05:03:11.089970 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 05:03:11.091308 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 05:03:11.246676 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none. Sep 16 05:03:11.247529 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 05:03:11.248540 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 05:03:11.250836 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 05:03:11.253760 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 05:03:11.255018 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 16 05:03:11.255779 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 05:03:11.255809 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 05:03:11.261485 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 05:03:11.263371 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 16 05:03:11.279849 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Sep 16 05:03:11.279925 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:11.281542 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:03:11.288944 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 16 05:03:11.289016 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 16 05:03:11.292254 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 05:03:11.453742 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 05:03:11.469734 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory Sep 16 05:03:11.474682 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 05:03:11.479083 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 05:03:11.640634 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 05:03:11.643094 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 05:03:11.645806 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 05:03:11.660961 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 05:03:11.663056 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:11.697919 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 16 05:03:11.702937 ignition[1315]: INFO : Ignition 2.22.0 Sep 16 05:03:11.702937 ignition[1315]: INFO : Stage: mount Sep 16 05:03:11.704545 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:11.704545 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 16 05:03:11.704545 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 16 05:03:11.704545 ignition[1315]: INFO : PUT result: OK Sep 16 05:03:11.706877 ignition[1315]: INFO : mount: mount passed Sep 16 05:03:11.708308 ignition[1315]: INFO : Ignition finished successfully Sep 16 05:03:11.709180 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 05:03:11.710572 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 05:03:11.730429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 05:03:11.758695 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Sep 16 05:03:11.761837 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:11.761894 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:03:11.770286 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 16 05:03:11.770356 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 16 05:03:11.772543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 16 05:03:11.807384 ignition[1343]: INFO : Ignition 2.22.0 Sep 16 05:03:11.807384 ignition[1343]: INFO : Stage: files Sep 16 05:03:11.808878 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:11.808878 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 16 05:03:11.808878 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 16 05:03:11.810184 ignition[1343]: INFO : PUT result: OK Sep 16 05:03:11.811380 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Sep 16 05:03:11.812627 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 05:03:11.812627 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 05:03:11.816155 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 05:03:11.816898 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 05:03:11.817639 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 05:03:11.817269 unknown[1343]: wrote ssh authorized keys file for user: core Sep 16 05:03:11.820238 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 16 05:03:11.821068 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 16 05:03:11.929710 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 05:03:12.134788 systemd-networkd[1151]: eth0: Gained IPv6LL Sep 16 05:03:12.230799 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 16 05:03:12.230799 ignition[1343]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 05:03:12.232324 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 05:03:12.232324 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 05:03:12.232324 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 05:03:12.232324 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 05:03:12.232324 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 05:03:12.232324 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 05:03:12.232324 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 05:03:12.237005 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 05:03:12.237722 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 05:03:12.237722 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 05:03:12.239807 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 05:03:12.240814 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 05:03:12.240814 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 16 05:03:12.671837 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 16 05:03:13.099380 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 05:03:13.099380 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 16 05:03:13.101326 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 05:03:13.105065 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 05:03:13.105065 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 16 05:03:13.105065 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 05:03:13.109167 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 05:03:13.109167 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 05:03:13.109167 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 05:03:13.109167 ignition[1343]: INFO : files: files passed
Sep 16 05:03:13.109167 ignition[1343]: INFO : Ignition finished successfully
Sep 16 05:03:13.106866 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 05:03:13.110801 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 05:03:13.113790 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 05:03:13.122879 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 05:03:13.123726 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 05:03:13.131918 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 05:03:13.131918 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 05:03:13.134149 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 05:03:13.134357 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 05:03:13.135430 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 05:03:13.137239 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 05:03:13.190155 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 05:03:13.190269 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 05:03:13.191379 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 05:03:13.192470 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 05:03:13.193220 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 05:03:13.194024 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 05:03:13.233560 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 05:03:13.235414 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 05:03:13.255096 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 05:03:13.256209 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 05:03:13.257252 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 05:03:13.258114 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 05:03:13.258252 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 05:03:13.259336 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 05:03:13.260313 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 05:03:13.260780 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 05:03:13.261487 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 05:03:13.262165 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 05:03:13.262792 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 05:03:13.263398 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 05:03:13.264261 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 05:03:13.264923 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 05:03:13.265895 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 05:03:13.266538 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 05:03:13.267457 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 05:03:13.267783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 05:03:13.268594 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 05:03:13.269309 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 05:03:13.269901 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 05:03:13.270018 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 05:03:13.270568 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 05:03:13.270719 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 05:03:13.272176 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 05:03:13.272369 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 05:03:13.272915 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 05:03:13.273040 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 05:03:13.275819 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 05:03:13.276191 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 05:03:13.276356 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 05:03:13.278848 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 05:03:13.279213 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 05:03:13.279362 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 05:03:13.280935 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 05:03:13.281379 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 05:03:13.286110 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 05:03:13.289755 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 05:03:13.308073 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 05:03:13.311457 ignition[1396]: INFO : Ignition 2.22.0
Sep 16 05:03:13.311457 ignition[1396]: INFO : Stage: umount
Sep 16 05:03:13.312617 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 05:03:13.312617 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 05:03:13.312617 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 05:03:13.314004 ignition[1396]: INFO : PUT result: OK
Sep 16 05:03:13.316306 ignition[1396]: INFO : umount: umount passed
Sep 16 05:03:13.316695 ignition[1396]: INFO : Ignition finished successfully
Sep 16 05:03:13.316650 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 05:03:13.317707 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 05:03:13.318550 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 05:03:13.318655 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 05:03:13.320178 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 05:03:13.320240 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 05:03:13.320963 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 05:03:13.321014 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 05:03:13.321564 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 16 05:03:13.321618 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 16 05:03:13.322183 systemd[1]: Stopped target network.target - Network.
Sep 16 05:03:13.322751 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 05:03:13.322813 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 05:03:13.323146 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 05:03:13.323413 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 05:03:13.326753 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 05:03:13.327114 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 05:03:13.328016 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 05:03:13.328580 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 05:03:13.328622 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 05:03:13.329162 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 05:03:13.329200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 05:03:13.329696 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 05:03:13.329752 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 05:03:13.330267 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 05:03:13.330306 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 05:03:13.330849 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 05:03:13.330891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 05:03:13.331538 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 05:03:13.332198 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 05:03:13.335278 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 05:03:13.335393 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 05:03:13.338445 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 05:03:13.339012 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 05:03:13.339093 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 05:03:13.343336 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 05:03:13.345177 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 05:03:13.345327 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 05:03:13.347424 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 05:03:13.348534 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 05:03:13.349027 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 05:03:13.349080 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 05:03:13.350734 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 05:03:13.351992 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 05:03:13.352058 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 05:03:13.355378 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 05:03:13.355443 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 05:03:13.356862 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 05:03:13.356908 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 05:03:13.357793 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 05:03:13.360439 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 05:03:13.377098 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 05:03:13.377458 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 05:03:13.379138 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 05:03:13.379257 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 05:03:13.380528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 05:03:13.380574 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 05:03:13.381346 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 05:03:13.381414 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 05:03:13.382475 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 05:03:13.382539 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 05:03:13.383693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 05:03:13.383760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 05:03:13.385966 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 05:03:13.386731 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 05:03:13.386803 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 05:03:13.389208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 05:03:13.389273 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 05:03:13.391888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 05:03:13.391953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 05:03:13.393809 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 05:03:13.396858 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 05:03:13.404209 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 05:03:13.404315 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 05:03:13.406032 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 05:03:13.407939 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 05:03:13.428946 systemd[1]: Switching root.
Sep 16 05:03:13.468936 systemd-journald[207]: Journal stopped
Sep 16 05:03:14.964784 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 16 05:03:14.964878 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 05:03:14.964909 kernel: SELinux: policy capability open_perms=1
Sep 16 05:03:14.964929 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 05:03:14.964951 kernel: SELinux: policy capability always_check_network=0
Sep 16 05:03:14.964969 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 05:03:14.964987 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 05:03:14.965007 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 05:03:14.965025 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 05:03:14.965043 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 05:03:14.965063 kernel: audit: type=1403 audit(1757998993.798:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 05:03:14.965083 systemd[1]: Successfully loaded SELinux policy in 78.602ms.
Sep 16 05:03:14.965120 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.592ms.
Sep 16 05:03:14.965141 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 05:03:14.965159 systemd[1]: Detected virtualization amazon.
Sep 16 05:03:14.965178 systemd[1]: Detected architecture x86-64.
Sep 16 05:03:14.965204 systemd[1]: Detected first boot.
Sep 16 05:03:14.965224 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 05:03:14.965245 zram_generator::config[1439]: No configuration found.
Sep 16 05:03:14.965264 kernel: Guest personality initialized and is inactive
Sep 16 05:03:14.965282 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 16 05:03:14.965313 kernel: Initialized host personality
Sep 16 05:03:14.965331 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 05:03:14.965350 systemd[1]: Populated /etc with preset unit settings.
Sep 16 05:03:14.965372 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 05:03:14.965394 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 05:03:14.965415 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 05:03:14.965437 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 05:03:14.965460 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 05:03:14.965486 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 05:03:14.965505 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 05:03:14.965525 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 05:03:14.965545 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 05:03:14.965565 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 05:03:14.965586 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 05:03:14.965607 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 05:03:14.965627 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 05:03:14.965649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 05:03:14.977109 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 05:03:14.977148 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 05:03:14.977174 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 05:03:14.977203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 05:03:14.977228 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 05:03:14.977253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 05:03:14.977277 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 05:03:14.977302 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 05:03:14.977337 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 05:03:14.977360 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 05:03:14.977385 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 05:03:14.977410 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 05:03:14.977435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 05:03:14.977458 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 05:03:14.977482 systemd[1]: Reached target swap.target - Swaps.
Sep 16 05:03:14.977506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 05:03:14.977530 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 05:03:14.977558 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 05:03:14.977583 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 05:03:14.977608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 05:03:14.977631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 05:03:14.981702 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 05:03:14.981783 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 05:03:14.981810 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 05:03:14.981835 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 05:03:14.981862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:03:14.981897 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 05:03:14.981922 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 05:03:14.981946 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 05:03:14.981971 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 05:03:14.981993 systemd[1]: Reached target machines.target - Containers.
Sep 16 05:03:14.982020 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 05:03:14.982047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 05:03:14.982069 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 05:03:14.982096 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 05:03:14.982121 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 05:03:14.982144 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 05:03:14.982166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 05:03:14.982191 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 05:03:14.982219 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 05:03:14.982244 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 05:03:14.982267 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 05:03:14.982293 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 05:03:14.982321 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 05:03:14.982344 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 05:03:14.982368 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 05:03:14.982393 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 05:03:14.982417 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 05:03:14.982445 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 05:03:14.982471 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 05:03:14.982495 kernel: loop: module loaded
Sep 16 05:03:14.982523 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 05:03:14.982548 kernel: fuse: init (API version 7.41)
Sep 16 05:03:14.982572 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 05:03:14.982596 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 05:03:14.982622 systemd[1]: Stopped verity-setup.service.
Sep 16 05:03:14.982651 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:03:14.987687 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 05:03:14.987718 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 05:03:14.987740 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 05:03:14.987769 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 05:03:14.987790 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 05:03:14.987815 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 05:03:14.987835 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 05:03:14.987856 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 05:03:14.987877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 05:03:14.987897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 05:03:14.987917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 05:03:14.987937 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 05:03:14.987959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 05:03:14.987983 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 05:03:14.988005 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 05:03:14.988027 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 05:03:14.988048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 05:03:14.988069 kernel: ACPI: bus type drm_connector registered
Sep 16 05:03:14.988090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 05:03:14.988112 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 05:03:14.988134 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 05:03:14.988155 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 05:03:14.988227 systemd-journald[1518]: Collecting audit messages is disabled.
Sep 16 05:03:14.988270 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 05:03:14.988291 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 05:03:14.988317 systemd-journald[1518]: Journal started
Sep 16 05:03:14.988358 systemd-journald[1518]: Runtime Journal (/run/log/journal/ec25dfb74482cbd6cf79a123a33e8bcf) is 4.8M, max 38.4M, 33.6M free.
Sep 16 05:03:14.601960 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 05:03:14.627045 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 16 05:03:14.627471 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 05:03:14.993982 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 05:03:15.001685 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 05:03:15.005701 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 05:03:15.013691 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 05:03:15.019689 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 05:03:15.027691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 05:03:15.031698 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 05:03:15.031783 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 05:03:15.049006 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 05:03:15.049086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 05:03:15.052575 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 05:03:15.058696 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 05:03:15.066821 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 05:03:15.075154 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 05:03:15.077300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 05:03:15.079251 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 05:03:15.081028 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 05:03:15.104713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 05:03:15.111514 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 05:03:15.115532 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 05:03:15.116846 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 05:03:15.123408 kernel: loop0: detected capacity change from 0 to 110984
Sep 16 05:03:15.120262 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 05:03:15.124056 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 05:03:15.128932 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 05:03:15.130794 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 05:03:15.160184 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 05:03:15.167126 systemd-journald[1518]: Time spent on flushing to /var/log/journal/ec25dfb74482cbd6cf79a123a33e8bcf is 50.658ms for 1020 entries.
Sep 16 05:03:15.167126 systemd-journald[1518]: System Journal (/var/log/journal/ec25dfb74482cbd6cf79a123a33e8bcf) is 8M, max 195.6M, 187.6M free.
Sep 16 05:03:15.227581 systemd-journald[1518]: Received client request to flush runtime journal.
Sep 16 05:03:15.229930 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 05:03:15.230753 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 05:03:15.250354 kernel: loop1: detected capacity change from 0 to 221472
Sep 16 05:03:15.250760 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 05:03:15.255581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 05:03:15.299196 systemd-tmpfiles[1590]: ACLs are not supported, ignoring.
Sep 16 05:03:15.299556 systemd-tmpfiles[1590]: ACLs are not supported, ignoring.
Sep 16 05:03:15.305984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 05:03:15.365704 kernel: loop2: detected capacity change from 0 to 128016
Sep 16 05:03:15.423310 kernel: loop3: detected capacity change from 0 to 72368
Sep 16 05:03:15.521700 kernel: loop4: detected capacity change from 0 to 110984
Sep 16 05:03:15.547717 kernel: loop5: detected capacity change from 0 to 221472
Sep 16 05:03:15.581694 kernel: loop6: detected capacity change from 0 to 128016
Sep 16 05:03:15.618697 kernel: loop7: detected capacity change from 0 to 72368
Sep 16 05:03:15.630617 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 05:03:15.639857 (sd-merge)[1596]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 16 05:03:15.641757 (sd-merge)[1596]: Merged extensions into '/usr'.
Sep 16 05:03:15.654204 systemd[1]: Reload requested from client PID 1551 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 05:03:15.654310 systemd[1]: Reloading... Sep 16 05:03:15.826687 zram_generator::config[1622]: No configuration found. Sep 16 05:03:16.172846 systemd[1]: Reloading finished in 517 ms. Sep 16 05:03:16.197934 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 05:03:16.206800 systemd[1]: Starting ensure-sysext.service... Sep 16 05:03:16.212326 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 05:03:16.251928 systemd[1]: Reload requested from client PID 1673 ('systemctl') (unit ensure-sysext.service)... Sep 16 05:03:16.251953 systemd[1]: Reloading... Sep 16 05:03:16.277154 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 05:03:16.277369 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 05:03:16.277960 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 05:03:16.278224 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 05:03:16.279193 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 05:03:16.279465 systemd-tmpfiles[1674]: ACLs are not supported, ignoring. Sep 16 05:03:16.279518 systemd-tmpfiles[1674]: ACLs are not supported, ignoring. Sep 16 05:03:16.285688 systemd-tmpfiles[1674]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 05:03:16.285967 systemd-tmpfiles[1674]: Skipping /boot Sep 16 05:03:16.298508 systemd-tmpfiles[1674]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 16 05:03:16.298770 systemd-tmpfiles[1674]: Skipping /boot Sep 16 05:03:16.337507 ldconfig[1543]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 05:03:16.341686 zram_generator::config[1702]: No configuration found. Sep 16 05:03:16.546916 systemd[1]: Reloading finished in 294 ms. Sep 16 05:03:16.568156 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 05:03:16.568910 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 05:03:16.574598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 05:03:16.580767 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:03:16.586835 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 05:03:16.590256 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 05:03:16.596563 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 05:03:16.604904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 05:03:16.608363 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 05:03:16.619887 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 05:03:16.623367 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:16.624066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:03:16.626151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:03:16.629388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:03:16.642416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 16 05:03:16.643151 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:03:16.643338 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:03:16.643484 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:16.651196 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:16.651466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:03:16.651750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:03:16.651893 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:03:16.652034 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:16.661181 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:16.661557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:03:16.664122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 16 05:03:16.664881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:03:16.665033 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:03:16.665293 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 05:03:16.667008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:16.682248 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 05:03:16.683412 systemd[1]: Finished ensure-sysext.service. Sep 16 05:03:16.698806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:03:16.699071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:03:16.699898 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 05:03:16.700831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:03:16.701060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:03:16.720863 systemd-udevd[1763]: Using default interface naming scheme 'v255'. Sep 16 05:03:16.723636 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:03:16.724571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:03:16.726593 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 05:03:16.727740 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 16 05:03:16.731581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 05:03:16.742282 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 05:03:16.746825 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 05:03:16.773562 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 05:03:16.787420 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 05:03:16.800784 augenrules[1803]: No rules Sep 16 05:03:16.800750 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:03:16.801036 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:03:16.808463 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 05:03:16.816924 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 05:03:16.830725 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 05:03:16.831551 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 05:03:16.947393 (udev-worker)[1836]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:03:17.001815 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 16 05:03:17.003699 systemd-resolved[1762]: Positive Trust Anchors: Sep 16 05:03:17.004127 systemd-resolved[1762]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 05:03:17.004257 systemd-resolved[1762]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 05:03:17.018980 systemd-resolved[1762]: Defaulting to hostname 'linux'. Sep 16 05:03:17.028286 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 05:03:17.029213 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 05:03:17.031349 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 05:03:17.032882 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 05:03:17.033773 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 05:03:17.035742 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 16 05:03:17.036521 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 05:03:17.037933 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 05:03:17.042356 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 05:03:17.044182 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 05:03:17.044229 systemd[1]: Reached target paths.target - Path Units.
Sep 16 05:03:17.045609 systemd[1]: Reached target timers.target - Timer Units. Sep 16 05:03:17.049062 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 05:03:17.052475 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 05:03:17.061455 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 05:03:17.062967 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 05:03:17.064749 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 05:03:17.076137 kernel: mousedev: PS/2 mouse device common for all mice Sep 16 05:03:17.074445 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 05:03:17.078222 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 05:03:17.082714 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 05:03:17.086503 systemd-networkd[1814]: lo: Link UP Sep 16 05:03:17.086520 systemd-networkd[1814]: lo: Gained carrier Sep 16 05:03:17.088402 systemd-networkd[1814]: Enumeration completed Sep 16 05:03:17.089303 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 05:03:17.090822 systemd[1]: Reached target network.target - Network. Sep 16 05:03:17.090927 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:03:17.090933 systemd-networkd[1814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 05:03:17.091654 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 05:03:17.092813 systemd[1]: Reached target basic.target - Basic System. Sep 16 05:03:17.094286 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 16 05:03:17.094323 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 05:03:17.096432 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 05:03:17.100568 systemd-networkd[1814]: eth0: Link UP Sep 16 05:03:17.100825 systemd-networkd[1814]: eth0: Gained carrier Sep 16 05:03:17.100856 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:03:17.101219 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 16 05:03:17.105776 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 05:03:17.109685 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 05:03:17.113852 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 05:03:17.117944 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 05:03:17.118551 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 05:03:17.126042 systemd-networkd[1814]: eth0: DHCPv4 address 172.31.26.160/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 16 05:03:17.128405 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 16 05:03:17.139492 jq[1856]: false Sep 16 05:03:17.135905 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 05:03:17.142994 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 05:03:17.146830 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 05:03:17.160121 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 16 05:03:17.179785 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 16 05:03:17.183943 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 05:03:17.185275 google_oslogin_nss_cache[1858]: oslogin_cache_refresh[1858]: Refreshing passwd entry cache Sep 16 05:03:17.185619 oslogin_cache_refresh[1858]: Refreshing passwd entry cache Sep 16 05:03:17.196384 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 05:03:17.199485 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 05:03:17.218908 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 05:03:17.221491 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 05:03:17.222246 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 05:03:17.226963 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 05:03:17.233591 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 05:03:17.237405 google_oslogin_nss_cache[1858]: oslogin_cache_refresh[1858]: Failure getting users, quitting Sep 16 05:03:17.237405 google_oslogin_nss_cache[1858]: oslogin_cache_refresh[1858]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 05:03:17.237405 google_oslogin_nss_cache[1858]: oslogin_cache_refresh[1858]: Refreshing group entry cache Sep 16 05:03:17.236860 oslogin_cache_refresh[1858]: Failure getting users, quitting Sep 16 05:03:17.236882 oslogin_cache_refresh[1858]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 16 05:03:17.236935 oslogin_cache_refresh[1858]: Refreshing group entry cache Sep 16 05:03:17.238871 google_oslogin_nss_cache[1858]: oslogin_cache_refresh[1858]: Failure getting groups, quitting Sep 16 05:03:17.238958 oslogin_cache_refresh[1858]: Failure getting groups, quitting Sep 16 05:03:17.239033 google_oslogin_nss_cache[1858]: oslogin_cache_refresh[1858]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 05:03:17.239075 oslogin_cache_refresh[1858]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 05:03:17.246583 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 05:03:17.247549 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 05:03:17.247895 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 05:03:17.248286 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 16 05:03:17.248543 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 16 05:03:17.286787 jq[1870]: true Sep 16 05:03:17.320504 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 05:03:17.321702 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 05:03:17.321985 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 16 05:03:17.361343 dbus-daemon[1854]: [system] SELinux support is enabled Sep 16 05:03:17.361554 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 05:03:17.362469 tar[1878]: linux-amd64/helm Sep 16 05:03:17.375342 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 16 05:03:17.375387 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 05:03:17.377127 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 05:03:17.377151 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 05:03:17.378717 dbus-daemon[1854]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1814 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 16 05:03:17.388083 update_engine[1869]: I20250916 05:03:17.387977 1869 main.cc:92] Flatcar Update Engine starting Sep 16 05:03:17.392833 dbus-daemon[1854]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 16 05:03:17.402335 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 16 05:03:17.403831 systemd[1]: Started update-engine.service - Update Engine. Sep 16 05:03:17.404626 update_engine[1869]: I20250916 05:03:17.404564 1869 update_check_scheduler.cc:74] Next update check in 6m35s Sep 16 05:03:17.413697 (ntainerd)[1903]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 05:03:17.418712 jq[1893]: true Sep 16 05:03:17.419299 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 05:03:17.426731 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 16 05:03:17.482164 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 05:03:17.482449 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 16 05:03:17.600614 extend-filesystems[1857]: Found /dev/nvme0n1p6 Sep 16 05:03:17.602714 coreos-metadata[1853]: Sep 16 05:03:17.602 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 16 05:03:17.632276 coreos-metadata[1853]: Sep 16 05:03:17.632 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 16 05:03:17.633251 extend-filesystems[1857]: Found /dev/nvme0n1p9 Sep 16 05:03:17.638582 coreos-metadata[1853]: Sep 16 05:03:17.633 INFO Fetch successful Sep 16 05:03:17.640861 coreos-metadata[1853]: Sep 16 05:03:17.640 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 16 05:03:17.654548 bash[1948]: Updated "/home/core/.ssh/authorized_keys" Sep 16 05:03:17.654775 coreos-metadata[1853]: Sep 16 05:03:17.654 INFO Fetch successful Sep 16 05:03:17.654775 coreos-metadata[1853]: Sep 16 05:03:17.654 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 16 05:03:17.643897 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 05:03:17.660780 coreos-metadata[1853]: Sep 16 05:03:17.658 INFO Fetch successful Sep 16 05:03:17.660780 coreos-metadata[1853]: Sep 16 05:03:17.658 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 16 05:03:17.658845 systemd[1]: Starting sshkeys.service... 
Sep 16 05:03:17.662849 coreos-metadata[1853]: Sep 16 05:03:17.662 INFO Fetch successful Sep 16 05:03:17.662849 coreos-metadata[1853]: Sep 16 05:03:17.662 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 16 05:03:17.663706 extend-filesystems[1857]: Checking size of /dev/nvme0n1p9 Sep 16 05:03:17.665473 coreos-metadata[1853]: Sep 16 05:03:17.665 INFO Fetch failed with 404: resource not found Sep 16 05:03:17.665473 coreos-metadata[1853]: Sep 16 05:03:17.665 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 16 05:03:17.677726 coreos-metadata[1853]: Sep 16 05:03:17.674 INFO Fetch successful Sep 16 05:03:17.677726 coreos-metadata[1853]: Sep 16 05:03:17.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 16 05:03:17.684989 coreos-metadata[1853]: Sep 16 05:03:17.683 INFO Fetch successful Sep 16 05:03:17.684989 coreos-metadata[1853]: Sep 16 05:03:17.683 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 16 05:03:17.685591 coreos-metadata[1853]: Sep 16 05:03:17.685 INFO Fetch successful Sep 16 05:03:17.685591 coreos-metadata[1853]: Sep 16 05:03:17.685 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 16 05:03:17.686379 coreos-metadata[1853]: Sep 16 05:03:17.686 INFO Fetch successful Sep 16 05:03:17.686379 coreos-metadata[1853]: Sep 16 05:03:17.686 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 16 05:03:17.687263 coreos-metadata[1853]: Sep 16 05:03:17.687 INFO Fetch successful Sep 16 05:03:17.714711 extend-filesystems[1857]: Resized partition /dev/nvme0n1p9 Sep 16 05:03:17.718255 extend-filesystems[1999]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 05:03:17.758680 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 16 05:03:17.778584 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 16 05:03:17.783709 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 16 05:03:17.808245 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 05:03:17.812248 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 05:03:17.816288 ntpd[1860]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 05:03:17.816739 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 05:03:17.816739 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 05:03:17.816739 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: ---------------------------------------------------- Sep 16 05:03:17.816739 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: ntp-4 is maintained by Network Time Foundation, Sep 16 05:03:17.816739 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 05:03:17.816739 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: corporation. Support and training for ntp-4 are Sep 16 05:03:17.816739 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: available at https://www.nwtime.org/support Sep 16 05:03:17.816739 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: ---------------------------------------------------- Sep 16 05:03:17.816362 ntpd[1860]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 05:03:17.816373 ntpd[1860]: ---------------------------------------------------- Sep 16 05:03:17.816382 ntpd[1860]: ntp-4 is maintained by Network Time Foundation, Sep 16 05:03:17.816391 ntpd[1860]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 05:03:17.816400 ntpd[1860]: corporation. Support and training for ntp-4 are Sep 16 05:03:17.816409 ntpd[1860]: available at https://www.nwtime.org/support Sep 16 05:03:17.816417 ntpd[1860]: ---------------------------------------------------- Sep 16 05:03:17.825134 ntpd[1860]: proto: precision = 0.066 usec (-24) Sep 16 05:03:17.826175 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: proto: precision = 0.066 usec (-24) Sep 16 05:03:17.826621 ntpd[1860]: basedate set to 2025-09-04 Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: basedate set to 2025-09-04 Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: gps base set to 2025-09-07 (week 2383) Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: Listen normally on 3 eth0 172.31.26.160:123 Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: Listen normally on 4 lo [::1]:123 Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: bind(21) AF_INET6 [fe80::4e9:f0ff:fec2:81bb%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 05:03:17.827424 ntpd[1860]: 16 Sep 05:03:17 ntpd[1860]: unable to create socket on eth0 (5) for [fe80::4e9:f0ff:fec2:81bb%2]:123 Sep 16 05:03:17.831235 kernel: ntpd[1860]: segfault at 24 ip 000055937c2e8aeb sp 00007fffac363cc0 error 4 in ntpd[68aeb,55937c286000+80000] likely on CPU 1 (core 0, socket 0) Sep 16 05:03:17.831272 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Sep 16 05:03:17.826645 ntpd[1860]: gps base set to 2025-09-07 (week 2383) Sep 16 05:03:17.826794 ntpd[1860]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 05:03:17.826826 ntpd[1860]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 05:03:17.827022 ntpd[1860]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 05:03:17.827050 ntpd[1860]: Listen normally on 3 eth0 172.31.26.160:123 Sep 16 05:03:17.827080 ntpd[1860]: Listen normally on 4 lo [::1]:123 Sep 16 05:03:17.827112 ntpd[1860]: bind(21) AF_INET6 [fe80::4e9:f0ff:fec2:81bb%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 05:03:17.827133 ntpd[1860]: unable to create socket on eth0 (5) for [fe80::4e9:f0ff:fec2:81bb%2]:123 Sep 16 05:03:17.866992 systemd-coredump[2015]: Process 1860 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Sep 16 05:03:17.872803 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Sep 16 05:03:17.878195 systemd[1]: Started systemd-coredump@0-2015-0.service - Process Core Dump (PID 2015/UID 0). Sep 16 05:03:17.898584 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 16 05:03:17.904616 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 16 05:03:17.914609 extend-filesystems[1999]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 16 05:03:17.914609 extend-filesystems[1999]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 16 05:03:17.914609 extend-filesystems[1999]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 16 05:03:17.925243 extend-filesystems[1857]: Resized filesystem in /dev/nvme0n1p9 Sep 16 05:03:17.915707 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 05:03:17.916765 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 16 05:03:17.940484 kernel: ACPI: button: Power Button [PWRF] Sep 16 05:03:17.963690 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 16 05:03:17.997114 coreos-metadata[2006]: Sep 16 05:03:17.995 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 16 05:03:18.004083 coreos-metadata[2006]: Sep 16 05:03:18.003 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 16 05:03:18.007398 coreos-metadata[2006]: Sep 16 05:03:18.007 INFO Fetch successful Sep 16 05:03:18.007687 coreos-metadata[2006]: Sep 16 05:03:18.007 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 16 05:03:18.008618 coreos-metadata[2006]: Sep 16 05:03:18.008 INFO Fetch successful Sep 16 05:03:18.009767 locksmithd[1917]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 05:03:18.015847 unknown[2006]: wrote ssh authorized keys file for user: core Sep 16 05:03:18.039717 kernel: ACPI: button: Sleep Button [SLPF] Sep 16 05:03:18.069642 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 16 05:03:18.088909 update-ssh-keys[2036]: Updated "/home/core/.ssh/authorized_keys" Sep 16 05:03:18.091026 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 16 05:03:18.097120 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 05:03:18.104935 systemd[1]: Finished sshkeys.service. 
Sep 16 05:03:18.153894 containerd[1903]: time="2025-09-16T05:03:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 05:03:18.184243 sshd_keygen[1899]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 05:03:18.195316 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 16 05:03:18.227766 containerd[1903]: time="2025-09-16T05:03:18.226821242Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 05:03:18.233038 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 05:03:18.240783 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 05:03:18.244479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:18.246531 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277276636Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.431µs" Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277323107Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277348479Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277527329Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277546982Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277580932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277647619Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277679471Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277948999Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277970697Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.277990005Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 05:03:18.278888 containerd[1903]: time="2025-09-16T05:03:18.278002799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 05:03:18.279332 containerd[1903]: time="2025-09-16T05:03:18.278105256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 05:03:18.279332 containerd[1903]: time="2025-09-16T05:03:18.278348610Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 05:03:18.279332 containerd[1903]: time="2025-09-16T05:03:18.278387620Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 05:03:18.279332 containerd[1903]: time="2025-09-16T05:03:18.278403512Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 05:03:18.279589 containerd[1903]: time="2025-09-16T05:03:18.279511618Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 05:03:18.280067 systemd[1]: issuegen.service: Deactivated successfully. 
Sep 16 05:03:18.280608 containerd[1903]: time="2025-09-16T05:03:18.280583922Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 05:03:18.280877 containerd[1903]: time="2025-09-16T05:03:18.280856882Z" level=info msg="metadata content store policy set" policy=shared Sep 16 05:03:18.280882 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 05:03:18.286684 containerd[1903]: time="2025-09-16T05:03:18.286402956Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 05:03:18.286684 containerd[1903]: time="2025-09-16T05:03:18.286481984Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 05:03:18.286684 containerd[1903]: time="2025-09-16T05:03:18.286502887Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 05:03:18.286684 containerd[1903]: time="2025-09-16T05:03:18.286559027Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 05:03:18.286684 containerd[1903]: time="2025-09-16T05:03:18.286577425Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 05:03:18.286684 containerd[1903]: time="2025-09-16T05:03:18.286593775Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 05:03:18.286684 containerd[1903]: time="2025-09-16T05:03:18.286615445Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 05:03:18.286684 containerd[1903]: time="2025-09-16T05:03:18.286644789Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287027606Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287051220Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287068874Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287086723Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287231134Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287254582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287283571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287299463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287316374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287330711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287347101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287360928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 
05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287376248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287389691Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 05:03:18.287539 containerd[1903]: time="2025-09-16T05:03:18.287402713Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 05:03:18.288125 containerd[1903]: time="2025-09-16T05:03:18.287480606Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 05:03:18.288125 containerd[1903]: time="2025-09-16T05:03:18.287506808Z" level=info msg="Start snapshots syncer" Sep 16 05:03:18.289167 containerd[1903]: time="2025-09-16T05:03:18.288224600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 05:03:18.289167 containerd[1903]: time="2025-09-16T05:03:18.289008447Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 05:03:18.289405 containerd[1903]: time="2025-09-16T05:03:18.289094883Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.289744704Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.289952770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.289983637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.289999022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.290015373Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.290032535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.290049307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.290065756Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.290106066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.290120355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 05:03:18.290186 containerd[1903]: time="2025-09-16T05:03:18.290136559Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 05:03:18.290719 containerd[1903]: time="2025-09-16T05:03:18.290697586Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 05:03:18.291203 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291238641Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291260665Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291275486Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291287420Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291302506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291318532Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291339167Z" level=info msg="runtime interface created" Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291346260Z" level=info msg="created NRI interface" Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291357297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291373688Z" level=info msg="Connect containerd service" Sep 16 05:03:18.291572 containerd[1903]: time="2025-09-16T05:03:18.291424776Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Sep 16 05:03:18.296250 containerd[1903]: time="2025-09-16T05:03:18.295655325Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 05:03:18.314990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 05:03:18.315605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:18.321873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:18.341780 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 05:03:18.347845 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 05:03:18.351876 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 16 05:03:18.353766 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 05:03:18.561359 containerd[1903]: time="2025-09-16T05:03:18.561265379Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 05:03:18.561576 containerd[1903]: time="2025-09-16T05:03:18.561459326Z" level=info msg="Start subscribing containerd event" Sep 16 05:03:18.561717 containerd[1903]: time="2025-09-16T05:03:18.561684577Z" level=info msg="Start recovering state" Sep 16 05:03:18.561882 containerd[1903]: time="2025-09-16T05:03:18.561866297Z" level=info msg="Start event monitor" Sep 16 05:03:18.562112 containerd[1903]: time="2025-09-16T05:03:18.562091962Z" level=info msg="Start cni network conf syncer for default" Sep 16 05:03:18.562190 containerd[1903]: time="2025-09-16T05:03:18.562176164Z" level=info msg="Start streaming server" Sep 16 05:03:18.562477 containerd[1903]: time="2025-09-16T05:03:18.562368331Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 16 05:03:18.562477 containerd[1903]: time="2025-09-16T05:03:18.562382060Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 05:03:18.562477 containerd[1903]: time="2025-09-16T05:03:18.562421791Z" level=info msg="runtime interface starting up..." Sep 16 05:03:18.562477 containerd[1903]: time="2025-09-16T05:03:18.562430367Z" level=info msg="starting plugins..." Sep 16 05:03:18.562477 containerd[1903]: time="2025-09-16T05:03:18.562452397Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 05:03:18.562733 containerd[1903]: time="2025-09-16T05:03:18.562714035Z" level=info msg="containerd successfully booted in 0.412355s" Sep 16 05:03:18.562838 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 05:03:18.633054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:18.716174 tar[1878]: linux-amd64/LICENSE Sep 16 05:03:18.716785 tar[1878]: linux-amd64/README.md Sep 16 05:03:18.763324 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 05:03:18.810976 systemd-logind[1866]: Watching system buttons on /dev/input/event2 (Power Button) Sep 16 05:03:18.811003 systemd-logind[1866]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 16 05:03:18.811027 systemd-logind[1866]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 16 05:03:18.813864 systemd-logind[1866]: New seat seat0. Sep 16 05:03:18.814694 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 05:03:18.930375 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 16 05:03:18.930872 systemd-coredump[2017]: Process 1860 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. 
Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1860: #0 0x000055937c2e8aeb n/a (ntpd + 0x68aeb) #1 0x000055937c291cdf n/a (ntpd + 0x11cdf) #2 0x000055937c292575 n/a (ntpd + 0x12575) #3 0x000055937c28dd8a n/a (ntpd + 0xdd8a) #4 0x000055937c28f5d3 n/a (ntpd + 0xf5d3) #5 0x000055937c297fd1 n/a (ntpd + 0x17fd1) #6 0x000055937c288c2d n/a (ntpd + 0x8c2d) #7 0x00007f6dfce3a16c n/a (libc.so.6 + 0x2716c) #8 0x00007f6dfce3a229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055937c288c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Sep 16 05:03:18.934925 dbus-daemon[1854]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 16 05:03:18.936591 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Sep 16 05:03:18.936807 systemd[1]: ntpd.service: Failed with result 'core-dump'. Sep 16 05:03:18.942171 systemd[1]: systemd-coredump@0-2015-0.service: Deactivated successfully. Sep 16 05:03:18.942920 dbus-daemon[1854]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1912 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 16 05:03:18.970604 systemd[1]: Starting polkit.service - Authorization Manager... Sep 16 05:03:19.046830 systemd-networkd[1814]: eth0: Gained IPv6LL Sep 16 05:03:19.050731 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Sep 16 05:03:19.052296 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 05:03:19.054323 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 05:03:19.059938 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 16 05:03:19.062950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:19.067997 systemd[1]: Started ntpd.service - Network Time Service. 
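The containerd startup earlier in the log reported "no network config found in /etc/cni/net.d: cni plugin not initialized". That is expected on first boot: nothing has installed a CNI configuration yet, and containerd's conf syncer will pick one up once a provider writes it. A sketch of the kind of conflist that would satisfy the loader; the plugin chain, file name, and the 10.88.0.0/16 subnet are illustrative assumptions, not values from this log:

```python
# Minimal assumed CNI conflist, of the shape containerd's CRI plugin looks
# for in /etc/cni/net.d (e.g. 10-containerd-net.conflist). Rendered as JSON
# here; a CNI provider or admin would write the file, not this script.
import json

conflist = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",          # linux bridge + veth pair per pod
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",  # node-local address allocation
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

rendered = json.dumps(conflist, indent=2)
```

Once such a file exists under the `confDir` from the CRI config (`/etc/cni/net.d`), the "cni network conf syncer" started above reloads it without a containerd restart.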
Sep 16 05:03:19.074175 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 05:03:19.127624 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 05:03:19.128453 ntpd[2200]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 05:03:19.129741 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 05:03:19.129741 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 05:03:19.129741 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: ---------------------------------------------------- Sep 16 05:03:19.129741 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: ntp-4 is maintained by Network Time Foundation, Sep 16 05:03:19.129741 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 05:03:19.129741 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: corporation. Support and training for ntp-4 are Sep 16 05:03:19.129741 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: available at https://www.nwtime.org/support Sep 16 05:03:19.129741 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: ---------------------------------------------------- Sep 16 05:03:19.128520 ntpd[2200]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 05:03:19.128531 ntpd[2200]: ---------------------------------------------------- Sep 16 05:03:19.128540 ntpd[2200]: ntp-4 is maintained by Network Time Foundation, Sep 16 05:03:19.128549 ntpd[2200]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 05:03:19.128558 ntpd[2200]: corporation. 
Support and training for ntp-4 are Sep 16 05:03:19.128567 ntpd[2200]: available at https://www.nwtime.org/support Sep 16 05:03:19.128577 ntpd[2200]: ---------------------------------------------------- Sep 16 05:03:19.131805 ntpd[2200]: proto: precision = 0.100 usec (-23) Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: proto: precision = 0.100 usec (-23) Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: basedate set to 2025-09-04 Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: gps base set to 2025-09-07 (week 2383) Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Listen normally on 3 eth0 172.31.26.160:123 Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Listen normally on 4 lo [::1]:123 Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Listen normally on 5 eth0 [fe80::4e9:f0ff:fec2:81bb%2]:123 Sep 16 05:03:19.132569 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: Listening on routing socket on fd #22 for interface updates Sep 16 05:03:19.132064 ntpd[2200]: basedate set to 2025-09-04 Sep 16 05:03:19.132077 ntpd[2200]: gps base set to 2025-09-07 (week 2383) Sep 16 05:03:19.132162 ntpd[2200]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 05:03:19.132189 ntpd[2200]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 05:03:19.132377 ntpd[2200]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 05:03:19.132403 ntpd[2200]: Listen normally on 3 eth0 172.31.26.160:123 Sep 16 05:03:19.132431 ntpd[2200]: Listen normally on 4 lo [::1]:123 Sep 16 05:03:19.132458 ntpd[2200]: Listen normally on 5 eth0 [fe80::4e9:f0ff:fec2:81bb%2]:123 Sep 16 05:03:19.132485 ntpd[2200]: Listening on routing socket on fd 
#22 for interface updates Sep 16 05:03:19.138953 ntpd[2200]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 05:03:19.140803 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 05:03:19.140803 ntpd[2200]: 16 Sep 05:03:19 ntpd[2200]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 05:03:19.138987 ntpd[2200]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 05:03:19.149881 polkitd[2191]: Started polkitd version 126 Sep 16 05:03:19.155622 polkitd[2191]: Loading rules from directory /etc/polkit-1/rules.d Sep 16 05:03:19.156127 polkitd[2191]: Loading rules from directory /run/polkit-1/rules.d Sep 16 05:03:19.156193 polkitd[2191]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 05:03:19.156522 polkitd[2191]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 16 05:03:19.156560 polkitd[2191]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 05:03:19.156608 polkitd[2191]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 16 05:03:19.157991 polkitd[2191]: Finished loading, compiling and executing 2 rules Sep 16 05:03:19.158299 systemd[1]: Started polkit.service - Authorization Manager. Sep 16 05:03:19.161717 dbus-daemon[1854]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 16 05:03:19.162254 polkitd[2191]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 16 05:03:19.185615 systemd-resolved[1762]: System hostname changed to 'ip-172-31-26-160'. 
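The polkitd messages above show the standard rules-directory search order; the two "No such file or directory" errors are harmless, since `/run/polkit-1/rules.d` and `/usr/local/share/polkit-1/rules.d` simply do not exist on this image, and the two rules that did compile came from the remaining directories. For reference, a hypothetical rule file of the kind those directories hold (the file name, action id, and group are illustrative assumptions, not from this log) — polkit rules are JavaScript evaluated inside polkitd:

```js
// Hypothetical /etc/polkit-1/rules.d/49-example.rules (config fragment):
// allow members of group "wheel" to set the hostname without authentication.
polkit.addRule(function (action, subject) {
    if (action.id == "org.freedesktop.hostname1.set-hostname" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});
```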
Sep 16 05:03:19.186034 systemd-hostnamed[1912]: Hostname set to (transient) Sep 16 05:03:19.214762 amazon-ssm-agent[2198]: Initializing new seelog logger Sep 16 05:03:19.215044 amazon-ssm-agent[2198]: New Seelog Logger Creation Complete Sep 16 05:03:19.215044 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.215044 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.215276 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 processing appconfig overrides Sep 16 05:03:19.215753 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.215753 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.215840 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 processing appconfig overrides Sep 16 05:03:19.216031 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.216031 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.216086 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 processing appconfig overrides Sep 16 05:03:19.216496 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2154 INFO Proxy environment variables: Sep 16 05:03:19.218307 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.218307 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 16 05:03:19.218413 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 processing appconfig overrides Sep 16 05:03:19.315947 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2155 INFO https_proxy: Sep 16 05:03:19.414192 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2155 INFO http_proxy: Sep 16 05:03:19.512601 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2156 INFO no_proxy: Sep 16 05:03:19.610863 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2158 INFO Checking if agent identity type OnPrem can be assumed Sep 16 05:03:19.710069 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2159 INFO Checking if agent identity type EC2 can be assumed Sep 16 05:03:19.710069 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.710069 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 05:03:19.710069 amazon-ssm-agent[2198]: 2025/09/16 05:03:19 processing appconfig overrides Sep 16 05:03:19.734454 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2548 INFO Agent will take identity from EC2 Sep 16 05:03:19.734454 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2562 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 16 05:03:19.734454 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2562 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 16 05:03:19.734454 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2562 INFO [amazon-ssm-agent] Starting Core Agent Sep 16 05:03:19.734454 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2562 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2562 INFO [Registrar] Starting registrar module Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2576 INFO [EC2Identity] Checking disk for registration info Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2577 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.2577 INFO [EC2Identity] Generating registration keypair Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.6641 INFO [EC2Identity] Checking write access before registering Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.6649 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.7092 INFO [EC2Identity] EC2 registration was successful. Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.7093 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.7094 INFO [CredentialRefresher] credentialRefresher has started Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.7094 INFO [CredentialRefresher] Starting credentials refresher loop Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.7341 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 16 05:03:19.734755 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.7343 INFO [CredentialRefresher] Credentials ready Sep 16 05:03:19.807293 amazon-ssm-agent[2198]: 2025-09-16 05:03:19.7345 INFO [CredentialRefresher] Next credential rotation will be in 29.999993263516668 minutes Sep 16 05:03:20.746320 amazon-ssm-agent[2198]: 2025-09-16 05:03:20.7462 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 16 05:03:20.846862 amazon-ssm-agent[2198]: 2025-09-16 05:03:20.7514 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2230) started Sep 16 05:03:20.947453 amazon-ssm-agent[2198]: 2025-09-16 05:03:20.7514 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 16 05:03:21.223192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 05:03:21.225145 systemd[1]: Started sshd@0-172.31.26.160:22-139.178.68.195:49196.service - OpenSSH per-connection server daemon (139.178.68.195:49196). Sep 16 05:03:21.360271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:21.361256 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 05:03:21.362586 systemd[1]: Startup finished in 2.675s (kernel) + 6.141s (initrd) + 7.640s (userspace) = 16.457s. 
Sep 16 05:03:21.367103 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:03:21.436400 sshd[2242]: Accepted publickey for core from 139.178.68.195 port 49196 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:03:21.437040 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:21.445916 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 05:03:21.446981 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 05:03:21.457360 systemd-logind[1866]: New session 1 of user core. Sep 16 05:03:21.471214 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 05:03:21.476950 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 05:03:21.494793 (systemd)[2257]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 05:03:21.498293 systemd-logind[1866]: New session c1 of user core. Sep 16 05:03:21.650690 systemd[2257]: Queued start job for default target default.target. Sep 16 05:03:21.659979 systemd[2257]: Created slice app.slice - User Application Slice. Sep 16 05:03:21.660014 systemd[2257]: Reached target paths.target - Paths. Sep 16 05:03:21.660154 systemd[2257]: Reached target timers.target - Timers. Sep 16 05:03:21.661759 systemd[2257]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 05:03:21.674255 systemd[2257]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 05:03:21.674509 systemd[2257]: Reached target sockets.target - Sockets. Sep 16 05:03:21.674756 systemd[2257]: Reached target basic.target - Basic System. Sep 16 05:03:21.674812 systemd[2257]: Reached target default.target - Main User Target. Sep 16 05:03:21.674842 systemd[2257]: Startup finished in 167ms. 
Sep 16 05:03:21.674968 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 05:03:21.679888 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 05:03:21.822345 systemd[1]: Started sshd@1-172.31.26.160:22-139.178.68.195:49210.service - OpenSSH per-connection server daemon (139.178.68.195:49210). Sep 16 05:03:21.996216 sshd[2272]: Accepted publickey for core from 139.178.68.195 port 49210 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:03:21.997778 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:22.003612 systemd-logind[1866]: New session 2 of user core. Sep 16 05:03:22.008873 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 05:03:22.130788 sshd[2275]: Connection closed by 139.178.68.195 port 49210 Sep 16 05:03:22.131878 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:22.135270 systemd[1]: sshd@1-172.31.26.160:22-139.178.68.195:49210.service: Deactivated successfully. Sep 16 05:03:22.136925 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 05:03:22.138963 systemd-logind[1866]: Session 2 logged out. Waiting for processes to exit. Sep 16 05:03:22.140317 systemd-logind[1866]: Removed session 2. Sep 16 05:03:22.164544 systemd[1]: Started sshd@2-172.31.26.160:22-139.178.68.195:49216.service - OpenSSH per-connection server daemon (139.178.68.195:49216). Sep 16 05:03:22.336132 sshd[2281]: Accepted publickey for core from 139.178.68.195 port 49216 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:03:22.337727 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:22.343821 systemd-logind[1866]: New session 3 of user core. Sep 16 05:03:22.347843 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 16 05:03:22.465757 sshd[2285]: Connection closed by 139.178.68.195 port 49216 Sep 16 05:03:22.466957 sshd-session[2281]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:22.473061 systemd[1]: sshd@2-172.31.26.160:22-139.178.68.195:49216.service: Deactivated successfully. Sep 16 05:03:22.476277 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 05:03:22.477723 systemd-logind[1866]: Session 3 logged out. Waiting for processes to exit. Sep 16 05:03:22.480345 systemd-logind[1866]: Removed session 3. Sep 16 05:03:22.501819 systemd[1]: Started sshd@3-172.31.26.160:22-139.178.68.195:49232.service - OpenSSH per-connection server daemon (139.178.68.195:49232). Sep 16 05:03:22.508701 kubelet[2250]: E0916 05:03:22.507768 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:03:22.510477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:03:22.510697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:03:22.512967 systemd[1]: kubelet.service: Consumed 1.047s CPU time, 266.6M memory peak. Sep 16 05:03:22.672437 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 49232 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:03:22.674093 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:22.679147 systemd-logind[1866]: New session 4 of user core. Sep 16 05:03:22.688901 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 16 05:03:22.806097 sshd[2295]: Connection closed by 139.178.68.195 port 49232 Sep 16 05:03:22.807038 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:22.810410 systemd[1]: sshd@3-172.31.26.160:22-139.178.68.195:49232.service: Deactivated successfully. Sep 16 05:03:22.812371 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 05:03:22.813551 systemd-logind[1866]: Session 4 logged out. Waiting for processes to exit. Sep 16 05:03:22.815144 systemd-logind[1866]: Removed session 4. Sep 16 05:03:22.836473 systemd[1]: Started sshd@4-172.31.26.160:22-139.178.68.195:49236.service - OpenSSH per-connection server daemon (139.178.68.195:49236). Sep 16 05:03:23.005231 sshd[2301]: Accepted publickey for core from 139.178.68.195 port 49236 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:03:23.006475 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:23.012740 systemd-logind[1866]: New session 5 of user core. Sep 16 05:03:23.020920 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 05:03:23.142594 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 05:03:23.142881 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:03:23.154915 sudo[2305]: pam_unix(sudo:session): session closed for user root Sep 16 05:03:23.177629 sshd[2304]: Connection closed by 139.178.68.195 port 49236 Sep 16 05:03:23.178332 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:23.182443 systemd[1]: sshd@4-172.31.26.160:22-139.178.68.195:49236.service: Deactivated successfully. Sep 16 05:03:23.184333 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 05:03:23.185077 systemd-logind[1866]: Session 5 logged out. Waiting for processes to exit. Sep 16 05:03:23.186178 systemd-logind[1866]: Removed session 5. 
Sep 16 05:03:23.214575 systemd[1]: Started sshd@5-172.31.26.160:22-139.178.68.195:49248.service - OpenSSH per-connection server daemon (139.178.68.195:49248). Sep 16 05:03:23.379703 sshd[2311]: Accepted publickey for core from 139.178.68.195 port 49248 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:03:23.380935 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:23.386513 systemd-logind[1866]: New session 6 of user core. Sep 16 05:03:23.392901 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 05:03:23.490920 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 05:03:23.491192 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:03:23.497148 sudo[2316]: pam_unix(sudo:session): session closed for user root Sep 16 05:03:23.502903 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 05:03:23.503170 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:03:23.514408 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:03:23.552989 augenrules[2338]: No rules Sep 16 05:03:23.554293 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:03:23.554564 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:03:23.555719 sudo[2315]: pam_unix(sudo:session): session closed for user root Sep 16 05:03:23.578399 sshd[2314]: Connection closed by 139.178.68.195 port 49248 Sep 16 05:03:23.578912 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:23.583102 systemd[1]: sshd@5-172.31.26.160:22-139.178.68.195:49248.service: Deactivated successfully. Sep 16 05:03:23.584957 systemd[1]: session-6.scope: Deactivated successfully. 
Sep 16 05:03:23.585709 systemd-logind[1866]: Session 6 logged out. Waiting for processes to exit. Sep 16 05:03:23.587224 systemd-logind[1866]: Removed session 6. Sep 16 05:03:23.610924 systemd[1]: Started sshd@6-172.31.26.160:22-139.178.68.195:49256.service - OpenSSH per-connection server daemon (139.178.68.195:49256). Sep 16 05:03:23.779108 sshd[2347]: Accepted publickey for core from 139.178.68.195 port 49256 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:03:23.781019 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:23.787438 systemd-logind[1866]: New session 7 of user core. Sep 16 05:03:23.796925 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 05:03:23.891639 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 05:03:23.891939 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:03:24.516028 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 05:03:24.526091 (dockerd)[2370]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 05:03:24.940348 dockerd[2370]: time="2025-09-16T05:03:24.940232802Z" level=info msg="Starting up" Sep 16 05:03:24.944687 dockerd[2370]: time="2025-09-16T05:03:24.944594014Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 05:03:24.955910 dockerd[2370]: time="2025-09-16T05:03:24.955756172Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 05:03:25.067456 dockerd[2370]: time="2025-09-16T05:03:25.067413358Z" level=info msg="Loading containers: start." 
Sep 16 05:03:25.078718 kernel: Initializing XFRM netlink socket Sep 16 05:03:25.304695 (udev-worker)[2390]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:03:25.347195 systemd-networkd[1814]: docker0: Link UP Sep 16 05:03:25.357044 dockerd[2370]: time="2025-09-16T05:03:25.356980393Z" level=info msg="Loading containers: done." Sep 16 05:03:25.371142 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck71181544-merged.mount: Deactivated successfully. Sep 16 05:03:25.376482 dockerd[2370]: time="2025-09-16T05:03:25.376438944Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 05:03:25.376642 dockerd[2370]: time="2025-09-16T05:03:25.376524695Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 05:03:25.376642 dockerd[2370]: time="2025-09-16T05:03:25.376608920Z" level=info msg="Initializing buildkit" Sep 16 05:03:25.401180 dockerd[2370]: time="2025-09-16T05:03:25.401124238Z" level=info msg="Completed buildkit initialization" Sep 16 05:03:25.408538 dockerd[2370]: time="2025-09-16T05:03:25.408463821Z" level=info msg="Daemon has completed initialization" Sep 16 05:03:25.408538 dockerd[2370]: time="2025-09-16T05:03:25.408524292Z" level=info msg="API listen on /run/docker.sock" Sep 16 05:03:25.408895 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 05:03:28.082565 systemd-resolved[1762]: Clock change detected. Flushing caches. Sep 16 05:03:28.567583 containerd[1903]: time="2025-09-16T05:03:28.567462026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 16 05:03:29.138993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019686849.mount: Deactivated successfully. 
Sep 16 05:03:30.397620 containerd[1903]: time="2025-09-16T05:03:30.397561526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:30.398918 containerd[1903]: time="2025-09-16T05:03:30.398835985Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 16 05:03:30.400299 containerd[1903]: time="2025-09-16T05:03:30.399868842Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:30.402724 containerd[1903]: time="2025-09-16T05:03:30.402583624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:30.403763 containerd[1903]: time="2025-09-16T05:03:30.403731126Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.836229652s" Sep 16 05:03:30.403931 containerd[1903]: time="2025-09-16T05:03:30.403909029Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 16 05:03:30.404597 containerd[1903]: time="2025-09-16T05:03:30.404562699Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 16 05:03:31.861583 containerd[1903]: time="2025-09-16T05:03:31.861532005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:31.862787 containerd[1903]: time="2025-09-16T05:03:31.862610467Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 16 05:03:31.863798 containerd[1903]: time="2025-09-16T05:03:31.863766857Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:31.866701 containerd[1903]: time="2025-09-16T05:03:31.866668160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:31.867828 containerd[1903]: time="2025-09-16T05:03:31.867642402Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.463044816s" Sep 16 05:03:31.867828 containerd[1903]: time="2025-09-16T05:03:31.867682980Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 16 05:03:31.868670 containerd[1903]: time="2025-09-16T05:03:31.868637245Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 16 05:03:33.105705 containerd[1903]: time="2025-09-16T05:03:33.105435781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:33.106695 containerd[1903]: 
time="2025-09-16T05:03:33.106532646Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 16 05:03:33.107711 containerd[1903]: time="2025-09-16T05:03:33.107657830Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:33.111507 containerd[1903]: time="2025-09-16T05:03:33.110541819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:33.111507 containerd[1903]: time="2025-09-16T05:03:33.111378303Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.242711958s" Sep 16 05:03:33.111507 containerd[1903]: time="2025-09-16T05:03:33.111423191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 16 05:03:33.112214 containerd[1903]: time="2025-09-16T05:03:33.112185133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 16 05:03:34.123415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262631731.mount: Deactivated successfully. 
Sep 16 05:03:34.647033 containerd[1903]: time="2025-09-16T05:03:34.646971859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:34.648040 containerd[1903]: time="2025-09-16T05:03:34.647865566Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 16 05:03:34.649088 containerd[1903]: time="2025-09-16T05:03:34.649053546Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:34.651178 containerd[1903]: time="2025-09-16T05:03:34.651140067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:34.651669 containerd[1903]: time="2025-09-16T05:03:34.651642491Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.539319829s" Sep 16 05:03:34.651768 containerd[1903]: time="2025-09-16T05:03:34.651754477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 16 05:03:34.652449 containerd[1903]: time="2025-09-16T05:03:34.652159733Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 05:03:34.712329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 05:03:34.713895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 16 05:03:34.949002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:34.954260 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:03:35.005456 kubelet[2661]: E0916 05:03:35.005365 2661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:03:35.009506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:03:35.009682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:03:35.010292 systemd[1]: kubelet.service: Consumed 175ms CPU time, 111M memory peak. Sep 16 05:03:35.137653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370982952.mount: Deactivated successfully. 
Sep 16 05:03:36.051287 containerd[1903]: time="2025-09-16T05:03:36.051226741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:36.052491 containerd[1903]: time="2025-09-16T05:03:36.052345812Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 16 05:03:36.053517 containerd[1903]: time="2025-09-16T05:03:36.053481670Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:36.056257 containerd[1903]: time="2025-09-16T05:03:36.056220762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:36.057551 containerd[1903]: time="2025-09-16T05:03:36.057274180Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.405088162s" Sep 16 05:03:36.057551 containerd[1903]: time="2025-09-16T05:03:36.057333191Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 16 05:03:36.057951 containerd[1903]: time="2025-09-16T05:03:36.057926761Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 05:03:36.467889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108757645.mount: Deactivated successfully. 
Sep 16 05:03:36.474195 containerd[1903]: time="2025-09-16T05:03:36.474143387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:36.475459 containerd[1903]: time="2025-09-16T05:03:36.475303975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 16 05:03:36.476901 containerd[1903]: time="2025-09-16T05:03:36.476867273Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:36.479439 containerd[1903]: time="2025-09-16T05:03:36.478815379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:36.479439 containerd[1903]: time="2025-09-16T05:03:36.479312346Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 421.352ms" Sep 16 05:03:36.479439 containerd[1903]: time="2025-09-16T05:03:36.479337142Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 05:03:36.479912 containerd[1903]: time="2025-09-16T05:03:36.479886902Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 16 05:03:36.929788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198704226.mount: Deactivated 
successfully. Sep 16 05:03:39.118762 containerd[1903]: time="2025-09-16T05:03:39.118644640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:39.120045 containerd[1903]: time="2025-09-16T05:03:39.120007147Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 16 05:03:39.121837 containerd[1903]: time="2025-09-16T05:03:39.120998749Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:39.123930 containerd[1903]: time="2025-09-16T05:03:39.123873545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:39.124762 containerd[1903]: time="2025-09-16T05:03:39.124723794Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.644809224s" Sep 16 05:03:39.124762 containerd[1903]: time="2025-09-16T05:03:39.124759114Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 16 05:03:41.971633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:41.971904 systemd[1]: kubelet.service: Consumed 175ms CPU time, 111M memory peak. Sep 16 05:03:41.974941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 16 05:03:42.012684 systemd[1]: Reload requested from client PID 2806 ('systemctl') (unit session-7.scope)... Sep 16 05:03:42.012704 systemd[1]: Reloading... Sep 16 05:03:42.146833 zram_generator::config[2857]: No configuration found. Sep 16 05:03:42.400453 systemd[1]: Reloading finished in 387 ms. Sep 16 05:03:42.463353 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 05:03:42.463463 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 05:03:42.463856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:42.463919 systemd[1]: kubelet.service: Consumed 124ms CPU time, 98.1M memory peak. Sep 16 05:03:42.465721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:42.679450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:42.692432 (kubelet)[2914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:03:42.738732 kubelet[2914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:42.738732 kubelet[2914]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 05:03:42.738732 kubelet[2914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 05:03:42.740524 kubelet[2914]: I0916 05:03:42.740314 2914 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 05:03:43.295328 kubelet[2914]: I0916 05:03:43.295273 2914 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 05:03:43.295328 kubelet[2914]: I0916 05:03:43.295311 2914 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 05:03:43.295606 kubelet[2914]: I0916 05:03:43.295569 2914 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 05:03:43.322611 kubelet[2914]: I0916 05:03:43.322451 2914 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 05:03:43.333598 kubelet[2914]: E0916 05:03:43.333541 2914 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:43.348069 kubelet[2914]: I0916 05:03:43.348038 2914 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 05:03:43.354794 kubelet[2914]: I0916 05:03:43.354698 2914 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 05:03:43.359826 kubelet[2914]: I0916 05:03:43.359771 2914 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 05:03:43.360016 kubelet[2914]: I0916 05:03:43.359977 2914 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 05:03:43.360224 kubelet[2914]: I0916 05:03:43.360012 2914 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Sep 16 05:03:43.360224 kubelet[2914]: I0916 05:03:43.360215 2914 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 05:03:43.360224 kubelet[2914]: I0916 05:03:43.360224 2914 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 05:03:43.360439 kubelet[2914]: I0916 05:03:43.360324 2914 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:03:43.364851 kubelet[2914]: I0916 05:03:43.364594 2914 kubelet.go:408] "Attempting to sync node with API server" Sep 16 05:03:43.364851 kubelet[2914]: I0916 05:03:43.364629 2914 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 05:03:43.364851 kubelet[2914]: I0916 05:03:43.364668 2914 kubelet.go:314] "Adding apiserver pod source" Sep 16 05:03:43.364851 kubelet[2914]: I0916 05:03:43.364682 2914 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 05:03:43.373281 kubelet[2914]: W0916 05:03:43.373225 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-160&limit=500&resourceVersion=0": dial tcp 172.31.26.160:6443: connect: connection refused Sep 16 05:03:43.373381 kubelet[2914]: E0916 05:03:43.373289 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-160&limit=500&resourceVersion=0\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:43.373820 kubelet[2914]: I0916 05:03:43.373784 2914 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 05:03:43.375552 kubelet[2914]: W0916 05:03:43.374875 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.Service: Get "https://172.31.26.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.160:6443: connect: connection refused Sep 16 05:03:43.375552 kubelet[2914]: E0916 05:03:43.374920 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:43.377369 kubelet[2914]: I0916 05:03:43.377346 2914 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 05:03:43.378761 kubelet[2914]: W0916 05:03:43.378720 2914 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 05:03:43.379315 kubelet[2914]: I0916 05:03:43.379298 2914 server.go:1274] "Started kubelet" Sep 16 05:03:43.380698 kubelet[2914]: I0916 05:03:43.380262 2914 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 05:03:43.381142 kubelet[2914]: I0916 05:03:43.381121 2914 server.go:449] "Adding debug handlers to kubelet server" Sep 16 05:03:43.385026 kubelet[2914]: I0916 05:03:43.384985 2914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 05:03:43.388008 kubelet[2914]: I0916 05:03:43.387358 2914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 05:03:43.388008 kubelet[2914]: I0916 05:03:43.387530 2914 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 05:03:43.390071 kubelet[2914]: E0916 05:03:43.387688 2914 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.160:6443/api/v1/namespaces/default/events\": dial tcp 
172.31.26.160:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-160.1865aacc28016a03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-160,UID:ip-172-31-26-160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-160,},FirstTimestamp:2025-09-16 05:03:43.379278339 +0000 UTC m=+0.683002110,LastTimestamp:2025-09-16 05:03:43.379278339 +0000 UTC m=+0.683002110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-160,}" Sep 16 05:03:43.391021 kubelet[2914]: I0916 05:03:43.390999 2914 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 05:03:43.396243 kubelet[2914]: I0916 05:03:43.395888 2914 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 05:03:43.396243 kubelet[2914]: E0916 05:03:43.396115 2914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-160\" not found" Sep 16 05:03:43.399130 kubelet[2914]: E0916 05:03:43.399090 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-160?timeout=10s\": dial tcp 172.31.26.160:6443: connect: connection refused" interval="200ms" Sep 16 05:03:43.400283 kubelet[2914]: I0916 05:03:43.400165 2914 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 05:03:43.401438 kubelet[2914]: I0916 05:03:43.401417 2914 reconciler.go:26] "Reconciler: start to sync state" Sep 16 05:03:43.402305 kubelet[2914]: W0916 05:03:43.401773 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.26.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.160:6443: connect: connection refused Sep 16 05:03:43.403682 kubelet[2914]: E0916 05:03:43.403651 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:43.405212 kubelet[2914]: I0916 05:03:43.405181 2914 factory.go:221] Registration of the systemd container factory successfully Sep 16 05:03:43.406140 kubelet[2914]: I0916 05:03:43.405270 2914 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 05:03:43.413840 kubelet[2914]: I0916 05:03:43.412700 2914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 05:03:43.413840 kubelet[2914]: I0916 05:03:43.412844 2914 factory.go:221] Registration of the containerd container factory successfully Sep 16 05:03:43.415144 kubelet[2914]: I0916 05:03:43.415119 2914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 05:03:43.415144 kubelet[2914]: I0916 05:03:43.415147 2914 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 05:03:43.415239 kubelet[2914]: I0916 05:03:43.415165 2914 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 05:03:43.415239 kubelet[2914]: E0916 05:03:43.415199 2914 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 05:03:43.415460 kubelet[2914]: E0916 05:03:43.415443 2914 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 05:03:43.427186 kubelet[2914]: W0916 05:03:43.427135 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.160:6443: connect: connection refused Sep 16 05:03:43.428109 kubelet[2914]: E0916 05:03:43.427203 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:43.438739 kubelet[2914]: I0916 05:03:43.438423 2914 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 05:03:43.438739 kubelet[2914]: I0916 05:03:43.438438 2914 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 05:03:43.438739 kubelet[2914]: I0916 05:03:43.438453 2914 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:03:43.446191 kubelet[2914]: I0916 05:03:43.446163 2914 policy_none.go:49] "None policy: Start" Sep 16 05:03:43.447418 kubelet[2914]: I0916 05:03:43.447210 2914 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 05:03:43.447418 kubelet[2914]: I0916 05:03:43.447258 2914 state_mem.go:35] "Initializing new in-memory state store" Sep 16 05:03:43.458160 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 05:03:43.470996 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 05:03:43.474780 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 16 05:03:43.485839 kubelet[2914]: I0916 05:03:43.485800 2914 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 05:03:43.486958 kubelet[2914]: I0916 05:03:43.486373 2914 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 05:03:43.486958 kubelet[2914]: I0916 05:03:43.486388 2914 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 05:03:43.487395 kubelet[2914]: I0916 05:03:43.487383 2914 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 05:03:43.488788 kubelet[2914]: E0916 05:03:43.488764 2914 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-160\" not found" Sep 16 05:03:43.527619 systemd[1]: Created slice kubepods-burstable-podeee4d1faddb0a6349a8ceec8d0782363.slice - libcontainer container kubepods-burstable-podeee4d1faddb0a6349a8ceec8d0782363.slice. Sep 16 05:03:43.543320 systemd[1]: Created slice kubepods-burstable-podff1f3509adb08dff0222e0d3b6323f8d.slice - libcontainer container kubepods-burstable-podff1f3509adb08dff0222e0d3b6323f8d.slice. Sep 16 05:03:43.549440 systemd[1]: Created slice kubepods-burstable-pod4cb2e0590f1961ed68e1de6ea412ccbd.slice - libcontainer container kubepods-burstable-pod4cb2e0590f1961ed68e1de6ea412ccbd.slice. 
Sep 16 05:03:43.589012 kubelet[2914]: I0916 05:03:43.588975 2914 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-160" Sep 16 05:03:43.589343 kubelet[2914]: E0916 05:03:43.589309 2914 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.160:6443/api/v1/nodes\": dial tcp 172.31.26.160:6443: connect: connection refused" node="ip-172-31-26-160" Sep 16 05:03:43.599975 kubelet[2914]: E0916 05:03:43.599937 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-160?timeout=10s\": dial tcp 172.31.26.160:6443: connect: connection refused" interval="400ms" Sep 16 05:03:43.603167 kubelet[2914]: I0916 05:03:43.603130 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160" Sep 16 05:03:43.603167 kubelet[2914]: I0916 05:03:43.603165 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160" Sep 16 05:03:43.603167 kubelet[2914]: I0916 05:03:43.603184 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160" Sep 16 
05:03:43.603475 kubelet[2914]: I0916 05:03:43.603202 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160" Sep 16 05:03:43.603475 kubelet[2914]: I0916 05:03:43.603222 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4cb2e0590f1961ed68e1de6ea412ccbd-ca-certs\") pod \"kube-apiserver-ip-172-31-26-160\" (UID: \"4cb2e0590f1961ed68e1de6ea412ccbd\") " pod="kube-system/kube-apiserver-ip-172-31-26-160" Sep 16 05:03:43.603475 kubelet[2914]: I0916 05:03:43.603236 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4cb2e0590f1961ed68e1de6ea412ccbd-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-160\" (UID: \"4cb2e0590f1961ed68e1de6ea412ccbd\") " pod="kube-system/kube-apiserver-ip-172-31-26-160" Sep 16 05:03:43.603475 kubelet[2914]: I0916 05:03:43.603252 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160" Sep 16 05:03:43.603475 kubelet[2914]: I0916 05:03:43.603266 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff1f3509adb08dff0222e0d3b6323f8d-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-160\" (UID: \"ff1f3509adb08dff0222e0d3b6323f8d\") " 
pod="kube-system/kube-scheduler-ip-172-31-26-160" Sep 16 05:03:43.603611 kubelet[2914]: I0916 05:03:43.603282 2914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4cb2e0590f1961ed68e1de6ea412ccbd-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-160\" (UID: \"4cb2e0590f1961ed68e1de6ea412ccbd\") " pod="kube-system/kube-apiserver-ip-172-31-26-160" Sep 16 05:03:43.792121 kubelet[2914]: I0916 05:03:43.792091 2914 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-160" Sep 16 05:03:43.792550 kubelet[2914]: E0916 05:03:43.792485 2914 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.160:6443/api/v1/nodes\": dial tcp 172.31.26.160:6443: connect: connection refused" node="ip-172-31-26-160" Sep 16 05:03:43.841510 containerd[1903]: time="2025-09-16T05:03:43.841382424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-160,Uid:eee4d1faddb0a6349a8ceec8d0782363,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:43.849341 containerd[1903]: time="2025-09-16T05:03:43.849299465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-160,Uid:ff1f3509adb08dff0222e0d3b6323f8d,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:43.853245 containerd[1903]: time="2025-09-16T05:03:43.853200363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-160,Uid:4cb2e0590f1961ed68e1de6ea412ccbd,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:43.987028 containerd[1903]: time="2025-09-16T05:03:43.986952999Z" level=info msg="connecting to shim d28536fa173817a6c2a7d2f0d75b61c723fe4944852536c0119c8dd2528b56bc" address="unix:///run/containerd/s/b5095885c375de7b866d97cbe2a1b695421354a5e522bc633fce77dabe7052a5" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:43.987986 containerd[1903]: 
time="2025-09-16T05:03:43.987960327Z" level=info msg="connecting to shim a8e6c453dcbbd4152b74cc205571df3f47794d4c65e22bd18d6f6d54fe1607b6" address="unix:///run/containerd/s/bf22eca44c29eded2b92fac8fdff5e848402ceb3938a09177ff8fde72b11e32d" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:43.990129 containerd[1903]: time="2025-09-16T05:03:43.990077072Z" level=info msg="connecting to shim b911716785126987af89e3f728632dcc22454d770e0b5e6cb19d2cfc42bbd320" address="unix:///run/containerd/s/2ea01f2252b1de61dca78ed156c5814e6d8431109a372ecb327c7ceb937e42bc" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:44.001855 kubelet[2914]: E0916 05:03:44.000849 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-160?timeout=10s\": dial tcp 172.31.26.160:6443: connect: connection refused" interval="800ms" Sep 16 05:03:44.114216 systemd[1]: Started cri-containerd-a8e6c453dcbbd4152b74cc205571df3f47794d4c65e22bd18d6f6d54fe1607b6.scope - libcontainer container a8e6c453dcbbd4152b74cc205571df3f47794d4c65e22bd18d6f6d54fe1607b6. Sep 16 05:03:44.129326 systemd[1]: Started cri-containerd-b911716785126987af89e3f728632dcc22454d770e0b5e6cb19d2cfc42bbd320.scope - libcontainer container b911716785126987af89e3f728632dcc22454d770e0b5e6cb19d2cfc42bbd320. Sep 16 05:03:44.130707 systemd[1]: Started cri-containerd-d28536fa173817a6c2a7d2f0d75b61c723fe4944852536c0119c8dd2528b56bc.scope - libcontainer container d28536fa173817a6c2a7d2f0d75b61c723fe4944852536c0119c8dd2528b56bc. 
Sep 16 05:03:44.198057 kubelet[2914]: I0916 05:03:44.198010 2914 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-160" Sep 16 05:03:44.199220 kubelet[2914]: E0916 05:03:44.199115 2914 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.160:6443/api/v1/nodes\": dial tcp 172.31.26.160:6443: connect: connection refused" node="ip-172-31-26-160" Sep 16 05:03:44.209755 containerd[1903]: time="2025-09-16T05:03:44.209369807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-160,Uid:ff1f3509adb08dff0222e0d3b6323f8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8e6c453dcbbd4152b74cc205571df3f47794d4c65e22bd18d6f6d54fe1607b6\"" Sep 16 05:03:44.216019 containerd[1903]: time="2025-09-16T05:03:44.215226850Z" level=info msg="CreateContainer within sandbox \"a8e6c453dcbbd4152b74cc205571df3f47794d4c65e22bd18d6f6d54fe1607b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 05:03:44.247057 containerd[1903]: time="2025-09-16T05:03:44.246759117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-160,Uid:4cb2e0590f1961ed68e1de6ea412ccbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b911716785126987af89e3f728632dcc22454d770e0b5e6cb19d2cfc42bbd320\"" Sep 16 05:03:44.256920 containerd[1903]: time="2025-09-16T05:03:44.256879049Z" level=info msg="CreateContainer within sandbox \"b911716785126987af89e3f728632dcc22454d770e0b5e6cb19d2cfc42bbd320\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 05:03:44.258743 containerd[1903]: time="2025-09-16T05:03:44.258709707Z" level=info msg="Container eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:44.270231 containerd[1903]: time="2025-09-16T05:03:44.270180523Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-160,Uid:eee4d1faddb0a6349a8ceec8d0782363,Namespace:kube-system,Attempt:0,} returns sandbox id \"d28536fa173817a6c2a7d2f0d75b61c723fe4944852536c0119c8dd2528b56bc\"" Sep 16 05:03:44.270372 containerd[1903]: time="2025-09-16T05:03:44.270338066Z" level=info msg="Container 306f3516d0a0050f78972e43532ea7a317fb7001c6d0daf81b302b1052ec7dc9: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:44.276732 containerd[1903]: time="2025-09-16T05:03:44.276608376Z" level=info msg="CreateContainer within sandbox \"d28536fa173817a6c2a7d2f0d75b61c723fe4944852536c0119c8dd2528b56bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 05:03:44.284861 containerd[1903]: time="2025-09-16T05:03:44.284697013Z" level=info msg="CreateContainer within sandbox \"a8e6c453dcbbd4152b74cc205571df3f47794d4c65e22bd18d6f6d54fe1607b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3\"" Sep 16 05:03:44.287748 containerd[1903]: time="2025-09-16T05:03:44.287709768Z" level=info msg="StartContainer for \"eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3\"" Sep 16 05:03:44.289507 containerd[1903]: time="2025-09-16T05:03:44.289459121Z" level=info msg="connecting to shim eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3" address="unix:///run/containerd/s/bf22eca44c29eded2b92fac8fdff5e848402ceb3938a09177ff8fde72b11e32d" protocol=ttrpc version=3 Sep 16 05:03:44.298095 containerd[1903]: time="2025-09-16T05:03:44.298035326Z" level=info msg="CreateContainer within sandbox \"b911716785126987af89e3f728632dcc22454d770e0b5e6cb19d2cfc42bbd320\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"306f3516d0a0050f78972e43532ea7a317fb7001c6d0daf81b302b1052ec7dc9\"" Sep 16 05:03:44.298793 containerd[1903]: time="2025-09-16T05:03:44.298761479Z" level=info msg="StartContainer for 
\"306f3516d0a0050f78972e43532ea7a317fb7001c6d0daf81b302b1052ec7dc9\"" Sep 16 05:03:44.303135 containerd[1903]: time="2025-09-16T05:03:44.303032500Z" level=info msg="connecting to shim 306f3516d0a0050f78972e43532ea7a317fb7001c6d0daf81b302b1052ec7dc9" address="unix:///run/containerd/s/2ea01f2252b1de61dca78ed156c5814e6d8431109a372ecb327c7ceb937e42bc" protocol=ttrpc version=3 Sep 16 05:03:44.304557 kubelet[2914]: W0916 05:03:44.304493 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.160:6443: connect: connection refused Sep 16 05:03:44.304724 kubelet[2914]: E0916 05:03:44.304568 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:44.306211 containerd[1903]: time="2025-09-16T05:03:44.306134360Z" level=info msg="Container ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:44.319412 containerd[1903]: time="2025-09-16T05:03:44.319365682Z" level=info msg="CreateContainer within sandbox \"d28536fa173817a6c2a7d2f0d75b61c723fe4944852536c0119c8dd2528b56bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8\"" Sep 16 05:03:44.320985 containerd[1903]: time="2025-09-16T05:03:44.320949102Z" level=info msg="StartContainer for \"ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8\"" Sep 16 05:03:44.324125 containerd[1903]: time="2025-09-16T05:03:44.323895686Z" level=info msg="connecting to shim 
ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8" address="unix:///run/containerd/s/b5095885c375de7b866d97cbe2a1b695421354a5e522bc633fce77dabe7052a5" protocol=ttrpc version=3 Sep 16 05:03:44.332049 systemd[1]: Started cri-containerd-eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3.scope - libcontainer container eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3. Sep 16 05:03:44.342301 systemd[1]: Started cri-containerd-306f3516d0a0050f78972e43532ea7a317fb7001c6d0daf81b302b1052ec7dc9.scope - libcontainer container 306f3516d0a0050f78972e43532ea7a317fb7001c6d0daf81b302b1052ec7dc9. Sep 16 05:03:44.365244 systemd[1]: Started cri-containerd-ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8.scope - libcontainer container ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8. Sep 16 05:03:44.478915 kubelet[2914]: W0916 05:03:44.478870 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.160:6443: connect: connection refused Sep 16 05:03:44.479139 kubelet[2914]: E0916 05:03:44.478934 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:44.479754 containerd[1903]: time="2025-09-16T05:03:44.479309157Z" level=info msg="StartContainer for \"306f3516d0a0050f78972e43532ea7a317fb7001c6d0daf81b302b1052ec7dc9\" returns successfully" Sep 16 05:03:44.480146 kubelet[2914]: W0916 05:03:44.480082 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.26.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.160:6443: connect: connection refused Sep 16 05:03:44.480372 kubelet[2914]: E0916 05:03:44.480158 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:44.495389 containerd[1903]: time="2025-09-16T05:03:44.495330711Z" level=info msg="StartContainer for \"ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8\" returns successfully" Sep 16 05:03:44.495650 containerd[1903]: time="2025-09-16T05:03:44.495551428Z" level=info msg="StartContainer for \"eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3\" returns successfully" Sep 16 05:03:44.721465 kubelet[2914]: W0916 05:03:44.721261 2914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-160&limit=500&resourceVersion=0": dial tcp 172.31.26.160:6443: connect: connection refused Sep 16 05:03:44.721465 kubelet[2914]: E0916 05:03:44.721358 2914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-160&limit=500&resourceVersion=0\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:44.781497 kubelet[2914]: E0916 05:03:44.781364 2914 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.160:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.160:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-26-160.1865aacc28016a03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-160,UID:ip-172-31-26-160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-160,},FirstTimestamp:2025-09-16 05:03:43.379278339 +0000 UTC m=+0.683002110,LastTimestamp:2025-09-16 05:03:43.379278339 +0000 UTC m=+0.683002110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-160,}" Sep 16 05:03:44.802363 kubelet[2914]: E0916 05:03:44.802307 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-160?timeout=10s\": dial tcp 172.31.26.160:6443: connect: connection refused" interval="1.6s" Sep 16 05:03:45.001667 kubelet[2914]: I0916 05:03:45.001441 2914 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-160" Sep 16 05:03:45.002112 kubelet[2914]: E0916 05:03:45.002078 2914 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.160:6443/api/v1/nodes\": dial tcp 172.31.26.160:6443: connect: connection refused" node="ip-172-31-26-160" Sep 16 05:03:45.359717 kubelet[2914]: E0916 05:03:45.359594 2914 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.160:6443: connect: connection refused" logger="UnhandledError" Sep 16 05:03:46.606110 kubelet[2914]: I0916 05:03:46.606080 2914 kubelet_node_status.go:72] "Attempting to register node" 
node="ip-172-31-26-160" Sep 16 05:03:47.218473 kubelet[2914]: E0916 05:03:47.218421 2914 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-160\" not found" node="ip-172-31-26-160" Sep 16 05:03:47.325996 kubelet[2914]: I0916 05:03:47.325890 2914 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-26-160" Sep 16 05:03:47.325996 kubelet[2914]: E0916 05:03:47.325941 2914 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-160\": node \"ip-172-31-26-160\" not found" Sep 16 05:03:47.340852 kubelet[2914]: E0916 05:03:47.340799 2914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-160\" not found" Sep 16 05:03:47.440936 kubelet[2914]: E0916 05:03:47.440883 2914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-160\" not found" Sep 16 05:03:48.377444 kubelet[2914]: I0916 05:03:48.377405 2914 apiserver.go:52] "Watching apiserver" Sep 16 05:03:48.401214 kubelet[2914]: I0916 05:03:48.401163 2914 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 05:03:49.141781 systemd[1]: Reload requested from client PID 3183 ('systemctl') (unit session-7.scope)... Sep 16 05:03:49.141800 systemd[1]: Reloading... Sep 16 05:03:49.278833 zram_generator::config[3231]: No configuration found. Sep 16 05:03:49.574912 systemd[1]: Reloading finished in 432 ms. Sep 16 05:03:49.616529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:49.630665 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 05:03:49.630989 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:49.631053 systemd[1]: kubelet.service: Consumed 1.044s CPU time, 126.6M memory peak. 
Sep 16 05:03:49.635556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:49.899490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:49.911360 (kubelet)[3288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:03:49.983129 kubelet[3288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:49.983129 kubelet[3288]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 05:03:49.983129 kubelet[3288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:49.983589 kubelet[3288]: I0916 05:03:49.983162 3288 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 05:03:49.991976 kubelet[3288]: I0916 05:03:49.991718 3288 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 05:03:49.991976 kubelet[3288]: I0916 05:03:49.991750 3288 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 05:03:49.992395 kubelet[3288]: I0916 05:03:49.992247 3288 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 05:03:49.995248 kubelet[3288]: I0916 05:03:49.995141 3288 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 16 05:03:50.014733 kubelet[3288]: I0916 05:03:50.013678 3288 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 16 05:03:50.018720 kubelet[3288]: I0916 05:03:50.018691 3288 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 16 05:03:50.020957 kubelet[3288]: I0916 05:03:50.020885 3288 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 16 05:03:50.021242 kubelet[3288]: I0916 05:03:50.021211 3288 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 16 05:03:50.021787 kubelet[3288]: I0916 05:03:50.021400 3288 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 16 05:03:50.021787 kubelet[3288]: I0916 05:03:50.021427 3288 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 16 05:03:50.021787 kubelet[3288]: I0916 05:03:50.021617 3288 topology_manager.go:138] "Creating topology manager with none policy"
Sep 16 05:03:50.021787 kubelet[3288]: I0916 05:03:50.021632 3288 container_manager_linux.go:300] "Creating device plugin manager"
Sep 16 05:03:50.021990 kubelet[3288]: I0916 05:03:50.021661 3288 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 05:03:50.021990 kubelet[3288]: I0916 05:03:50.021755 3288 kubelet.go:408] "Attempting to sync node with API server"
Sep 16 05:03:50.022064 kubelet[3288]: I0916 05:03:50.022054 3288 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 16 05:03:50.022126 kubelet[3288]: I0916 05:03:50.022120 3288 kubelet.go:314] "Adding apiserver pod source"
Sep 16 05:03:50.022190 kubelet[3288]: I0916 05:03:50.022164 3288 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 16 05:03:50.025412 kubelet[3288]: I0916 05:03:50.025392 3288 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 16 05:03:50.025834 kubelet[3288]: I0916 05:03:50.025799 3288 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 16 05:03:50.026206 kubelet[3288]: I0916 05:03:50.026186 3288 server.go:1274] "Started kubelet"
Sep 16 05:03:50.030335 kubelet[3288]: I0916 05:03:50.030310 3288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 16 05:03:50.046387 kubelet[3288]: I0916 05:03:50.046336 3288 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 16 05:03:50.048830 kubelet[3288]: I0916 05:03:50.047616 3288 server.go:449] "Adding debug handlers to kubelet server"
Sep 16 05:03:50.049716 kubelet[3288]: I0916 05:03:50.049684 3288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 16 05:03:50.050032 kubelet[3288]: I0916 05:03:50.050019 3288 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 16 05:03:50.050491 kubelet[3288]: I0916 05:03:50.050474 3288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 16 05:03:50.052741 kubelet[3288]: I0916 05:03:50.052713 3288 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 16 05:03:50.054323 kubelet[3288]: I0916 05:03:50.053897 3288 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 16 05:03:50.055423 kubelet[3288]: I0916 05:03:50.055411 3288 reconciler.go:26] "Reconciler: start to sync state"
Sep 16 05:03:50.059149 kubelet[3288]: I0916 05:03:50.059125 3288 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 16 05:03:50.062714 kubelet[3288]: I0916 05:03:50.062658 3288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 16 05:03:50.063961 kubelet[3288]: I0916 05:03:50.063934 3288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 16 05:03:50.064050 kubelet[3288]: I0916 05:03:50.063974 3288 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 16 05:03:50.064050 kubelet[3288]: I0916 05:03:50.063993 3288 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 16 05:03:50.064105 kubelet[3288]: E0916 05:03:50.064030 3288 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 16 05:03:50.064351 kubelet[3288]: E0916 05:03:50.064335 3288 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 16 05:03:50.065836 kubelet[3288]: I0916 05:03:50.065532 3288 factory.go:221] Registration of the containerd container factory successfully
Sep 16 05:03:50.065937 kubelet[3288]: I0916 05:03:50.065927 3288 factory.go:221] Registration of the systemd container factory successfully
Sep 16 05:03:50.115050 kubelet[3288]: I0916 05:03:50.115028 3288 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 16 05:03:50.115286 kubelet[3288]: I0916 05:03:50.115273 3288 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 16 05:03:50.115387 kubelet[3288]: I0916 05:03:50.115379 3288 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 05:03:50.115572 kubelet[3288]: I0916 05:03:50.115561 3288 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 16 05:03:50.115629 kubelet[3288]: I0916 05:03:50.115613 3288 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 16 05:03:50.115670 kubelet[3288]: I0916 05:03:50.115665 3288 policy_none.go:49] "None policy: Start"
Sep 16 05:03:50.116335 kubelet[3288]: I0916 05:03:50.116314 3288 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 16 05:03:50.116414 kubelet[3288]: I0916 05:03:50.116350 3288 state_mem.go:35] "Initializing new in-memory state store"
Sep 16 05:03:50.116821 kubelet[3288]: I0916 05:03:50.116786 3288 state_mem.go:75] "Updated machine memory state"
Sep 16 05:03:50.126952 kubelet[3288]: I0916 05:03:50.126802 3288 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 16 05:03:50.127067 kubelet[3288]: I0916 05:03:50.126980 3288 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 16 05:03:50.127067 kubelet[3288]: I0916 05:03:50.126989 3288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 16 05:03:50.129617 kubelet[3288]: I0916 05:03:50.127879 3288 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 16 05:03:50.238336 kubelet[3288]: I0916 05:03:50.238282 3288 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-160"
Sep 16 05:03:50.251886 kubelet[3288]: I0916 05:03:50.251839 3288 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-26-160"
Sep 16 05:03:50.252017 kubelet[3288]: I0916 05:03:50.251904 3288 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-26-160"
Sep 16 05:03:50.256044 kubelet[3288]: I0916 05:03:50.256012 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160"
Sep 16 05:03:50.256044 kubelet[3288]: I0916 05:03:50.256045 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160"
Sep 16 05:03:50.256044 kubelet[3288]: I0916 05:03:50.256064 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160"
Sep 16 05:03:50.256344 kubelet[3288]: I0916 05:03:50.256083 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4cb2e0590f1961ed68e1de6ea412ccbd-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-160\" (UID: \"4cb2e0590f1961ed68e1de6ea412ccbd\") " pod="kube-system/kube-apiserver-ip-172-31-26-160"
Sep 16 05:03:50.256344 kubelet[3288]: I0916 05:03:50.256102 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4cb2e0590f1961ed68e1de6ea412ccbd-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-160\" (UID: \"4cb2e0590f1961ed68e1de6ea412ccbd\") " pod="kube-system/kube-apiserver-ip-172-31-26-160"
Sep 16 05:03:50.256344 kubelet[3288]: I0916 05:03:50.256115 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160"
Sep 16 05:03:50.256344 kubelet[3288]: I0916 05:03:50.256129 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eee4d1faddb0a6349a8ceec8d0782363-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-160\" (UID: \"eee4d1faddb0a6349a8ceec8d0782363\") " pod="kube-system/kube-controller-manager-ip-172-31-26-160"
Sep 16 05:03:50.256344 kubelet[3288]: I0916 05:03:50.256143 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff1f3509adb08dff0222e0d3b6323f8d-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-160\" (UID: \"ff1f3509adb08dff0222e0d3b6323f8d\") " pod="kube-system/kube-scheduler-ip-172-31-26-160"
Sep 16 05:03:50.257971 kubelet[3288]: I0916 05:03:50.256158 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4cb2e0590f1961ed68e1de6ea412ccbd-ca-certs\") pod \"kube-apiserver-ip-172-31-26-160\" (UID: \"4cb2e0590f1961ed68e1de6ea412ccbd\") " pod="kube-system/kube-apiserver-ip-172-31-26-160"
Sep 16 05:03:51.026504 kubelet[3288]: I0916 05:03:51.025977 3288 apiserver.go:52] "Watching apiserver"
Sep 16 05:03:51.055151 kubelet[3288]: I0916 05:03:51.054972 3288 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 16 05:03:51.094395 kubelet[3288]: I0916 05:03:51.094310 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-160" podStartSLOduration=1.094287911 podStartE2EDuration="1.094287911s" podCreationTimestamp="2025-09-16 05:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:51.079151726 +0000 UTC m=+1.158489185" watchObservedRunningTime="2025-09-16 05:03:51.094287911 +0000 UTC m=+1.173625357"
Sep 16 05:03:51.115145 kubelet[3288]: I0916 05:03:51.115094 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-160" podStartSLOduration=1.115060133 podStartE2EDuration="1.115060133s" podCreationTimestamp="2025-09-16 05:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:51.096336916 +0000 UTC m=+1.175674371" watchObservedRunningTime="2025-09-16 05:03:51.115060133 +0000 UTC m=+1.194397579"
Sep 16 05:03:51.128435 kubelet[3288]: I0916 05:03:51.128373 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-160" podStartSLOduration=1.128350477 podStartE2EDuration="1.128350477s" podCreationTimestamp="2025-09-16 05:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:51.115550858 +0000 UTC m=+1.194888313" watchObservedRunningTime="2025-09-16 05:03:51.128350477 +0000 UTC m=+1.207687931"
Sep 16 05:03:51.170329 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 16 05:03:54.197982 kubelet[3288]: I0916 05:03:54.197942 3288 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 16 05:03:54.198556 containerd[1903]: time="2025-09-16T05:03:54.198525565Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 16 05:03:54.198917 kubelet[3288]: I0916 05:03:54.198695 3288 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 16 05:03:55.072299 systemd[1]: Created slice kubepods-besteffort-pod8db275ff_0d88_4299_8966_b472d61d38d5.slice - libcontainer container kubepods-besteffort-pod8db275ff_0d88_4299_8966_b472d61d38d5.slice.
Sep 16 05:03:55.090512 kubelet[3288]: I0916 05:03:55.090351 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8db275ff-0d88-4299-8966-b472d61d38d5-kube-proxy\") pod \"kube-proxy-r5px5\" (UID: \"8db275ff-0d88-4299-8966-b472d61d38d5\") " pod="kube-system/kube-proxy-r5px5"
Sep 16 05:03:55.090674 kubelet[3288]: I0916 05:03:55.090572 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8db275ff-0d88-4299-8966-b472d61d38d5-lib-modules\") pod \"kube-proxy-r5px5\" (UID: \"8db275ff-0d88-4299-8966-b472d61d38d5\") " pod="kube-system/kube-proxy-r5px5"
Sep 16 05:03:55.090674 kubelet[3288]: I0916 05:03:55.090631 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26t5\" (UniqueName: \"kubernetes.io/projected/8db275ff-0d88-4299-8966-b472d61d38d5-kube-api-access-k26t5\") pod \"kube-proxy-r5px5\" (UID: \"8db275ff-0d88-4299-8966-b472d61d38d5\") " pod="kube-system/kube-proxy-r5px5"
Sep 16 05:03:55.090739 kubelet[3288]: I0916 05:03:55.090653 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8db275ff-0d88-4299-8966-b472d61d38d5-xtables-lock\") pod \"kube-proxy-r5px5\" (UID: \"8db275ff-0d88-4299-8966-b472d61d38d5\") " pod="kube-system/kube-proxy-r5px5"
Sep 16 05:03:55.145468 systemd[1]: Created slice kubepods-besteffort-pod46f88c94_fdb0_4b4b_b248_80080211ff5c.slice - libcontainer container kubepods-besteffort-pod46f88c94_fdb0_4b4b_b248_80080211ff5c.slice.
Sep 16 05:03:55.191045 kubelet[3288]: I0916 05:03:55.191006 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtgzs\" (UniqueName: \"kubernetes.io/projected/46f88c94-fdb0-4b4b-b248-80080211ff5c-kube-api-access-jtgzs\") pod \"tigera-operator-58fc44c59b-sfj2w\" (UID: \"46f88c94-fdb0-4b4b-b248-80080211ff5c\") " pod="tigera-operator/tigera-operator-58fc44c59b-sfj2w"
Sep 16 05:03:55.191045 kubelet[3288]: I0916 05:03:55.191050 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46f88c94-fdb0-4b4b-b248-80080211ff5c-var-lib-calico\") pod \"tigera-operator-58fc44c59b-sfj2w\" (UID: \"46f88c94-fdb0-4b4b-b248-80080211ff5c\") " pod="tigera-operator/tigera-operator-58fc44c59b-sfj2w"
Sep 16 05:03:55.381978 containerd[1903]: time="2025-09-16T05:03:55.381868897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5px5,Uid:8db275ff-0d88-4299-8966-b472d61d38d5,Namespace:kube-system,Attempt:0,}"
Sep 16 05:03:55.411614 containerd[1903]: time="2025-09-16T05:03:55.411543546Z" level=info msg="connecting to shim 82a15a05d6fe2aaa70aaa1c8559c6b06e89bd62c13f7e205d227a70c7f49bb98" address="unix:///run/containerd/s/3f1502d486b282c1a014e89ea29086ab8f2a13a84342a31f473c8d082e8423b7" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:03:55.437015 systemd[1]: Started cri-containerd-82a15a05d6fe2aaa70aaa1c8559c6b06e89bd62c13f7e205d227a70c7f49bb98.scope - libcontainer container 82a15a05d6fe2aaa70aaa1c8559c6b06e89bd62c13f7e205d227a70c7f49bb98.
Sep 16 05:03:55.453258 containerd[1903]: time="2025-09-16T05:03:55.453208737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-sfj2w,Uid:46f88c94-fdb0-4b4b-b248-80080211ff5c,Namespace:tigera-operator,Attempt:0,}"
Sep 16 05:03:55.475260 containerd[1903]: time="2025-09-16T05:03:55.475217328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5px5,Uid:8db275ff-0d88-4299-8966-b472d61d38d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"82a15a05d6fe2aaa70aaa1c8559c6b06e89bd62c13f7e205d227a70c7f49bb98\""
Sep 16 05:03:55.478836 containerd[1903]: time="2025-09-16T05:03:55.478780313Z" level=info msg="CreateContainer within sandbox \"82a15a05d6fe2aaa70aaa1c8559c6b06e89bd62c13f7e205d227a70c7f49bb98\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 16 05:03:55.501792 containerd[1903]: time="2025-09-16T05:03:55.501737129Z" level=info msg="connecting to shim 36941ded6aa3d1c846ce04ad2fe29a4b2e229ac1ddcc9b26762ceecf5ca7f7e8" address="unix:///run/containerd/s/367b364d278228f9286fb1bd186841c35846f2a8691ebfb322a6fb18c7ab92b9" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:03:55.508116 containerd[1903]: time="2025-09-16T05:03:55.508073817Z" level=info msg="Container b8393d193764bccf0b82b29c6abaa934be87eb27412841b4b30c8e42627c2e4e: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:03:55.534161 containerd[1903]: time="2025-09-16T05:03:55.534114732Z" level=info msg="CreateContainer within sandbox \"82a15a05d6fe2aaa70aaa1c8559c6b06e89bd62c13f7e205d227a70c7f49bb98\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b8393d193764bccf0b82b29c6abaa934be87eb27412841b4b30c8e42627c2e4e\""
Sep 16 05:03:55.535180 containerd[1903]: time="2025-09-16T05:03:55.535135812Z" level=info msg="StartContainer for \"b8393d193764bccf0b82b29c6abaa934be87eb27412841b4b30c8e42627c2e4e\""
Sep 16 05:03:55.541244 containerd[1903]: time="2025-09-16T05:03:55.540598215Z" level=info msg="connecting to shim b8393d193764bccf0b82b29c6abaa934be87eb27412841b4b30c8e42627c2e4e" address="unix:///run/containerd/s/3f1502d486b282c1a014e89ea29086ab8f2a13a84342a31f473c8d082e8423b7" protocol=ttrpc version=3
Sep 16 05:03:55.555076 systemd[1]: Started cri-containerd-36941ded6aa3d1c846ce04ad2fe29a4b2e229ac1ddcc9b26762ceecf5ca7f7e8.scope - libcontainer container 36941ded6aa3d1c846ce04ad2fe29a4b2e229ac1ddcc9b26762ceecf5ca7f7e8.
Sep 16 05:03:55.568686 systemd[1]: Started cri-containerd-b8393d193764bccf0b82b29c6abaa934be87eb27412841b4b30c8e42627c2e4e.scope - libcontainer container b8393d193764bccf0b82b29c6abaa934be87eb27412841b4b30c8e42627c2e4e.
Sep 16 05:03:55.635174 containerd[1903]: time="2025-09-16T05:03:55.634602961Z" level=info msg="StartContainer for \"b8393d193764bccf0b82b29c6abaa934be87eb27412841b4b30c8e42627c2e4e\" returns successfully"
Sep 16 05:03:55.643585 containerd[1903]: time="2025-09-16T05:03:55.643529690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-sfj2w,Uid:46f88c94-fdb0-4b4b-b248-80080211ff5c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"36941ded6aa3d1c846ce04ad2fe29a4b2e229ac1ddcc9b26762ceecf5ca7f7e8\""
Sep 16 05:03:55.646029 containerd[1903]: time="2025-09-16T05:03:55.645999068Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 16 05:03:56.154205 kubelet[3288]: I0916 05:03:56.154052 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r5px5" podStartSLOduration=1.154036052 podStartE2EDuration="1.154036052s" podCreationTimestamp="2025-09-16 05:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:56.137948216 +0000 UTC m=+6.217285667" watchObservedRunningTime="2025-09-16 05:03:56.154036052 +0000 UTC m=+6.233373508"
Sep 16 05:03:57.000676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038080470.mount: Deactivated successfully.
Sep 16 05:03:59.222530 containerd[1903]: time="2025-09-16T05:03:59.222461988Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:03:59.224363 containerd[1903]: time="2025-09-16T05:03:59.224227330Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 16 05:03:59.226617 containerd[1903]: time="2025-09-16T05:03:59.226580080Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:03:59.229930 containerd[1903]: time="2025-09-16T05:03:59.229893523Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:03:59.230736 containerd[1903]: time="2025-09-16T05:03:59.230707530Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.584538632s"
Sep 16 05:03:59.230849 containerd[1903]: time="2025-09-16T05:03:59.230834568Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 16 05:03:59.233609 containerd[1903]: time="2025-09-16T05:03:59.233575796Z" level=info msg="CreateContainer within sandbox \"36941ded6aa3d1c846ce04ad2fe29a4b2e229ac1ddcc9b26762ceecf5ca7f7e8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 16 05:03:59.250131 containerd[1903]: time="2025-09-16T05:03:59.247712618Z" level=info msg="Container 02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:03:59.260910 containerd[1903]: time="2025-09-16T05:03:59.260870385Z" level=info msg="CreateContainer within sandbox \"36941ded6aa3d1c846ce04ad2fe29a4b2e229ac1ddcc9b26762ceecf5ca7f7e8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1\""
Sep 16 05:03:59.261363 containerd[1903]: time="2025-09-16T05:03:59.261342564Z" level=info msg="StartContainer for \"02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1\""
Sep 16 05:03:59.262311 containerd[1903]: time="2025-09-16T05:03:59.262254687Z" level=info msg="connecting to shim 02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1" address="unix:///run/containerd/s/367b364d278228f9286fb1bd186841c35846f2a8691ebfb322a6fb18c7ab92b9" protocol=ttrpc version=3
Sep 16 05:03:59.284004 systemd[1]: Started cri-containerd-02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1.scope - libcontainer container 02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1.
Sep 16 05:03:59.315707 containerd[1903]: time="2025-09-16T05:03:59.315662838Z" level=info msg="StartContainer for \"02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1\" returns successfully"
Sep 16 05:04:04.861581 update_engine[1869]: I20250916 05:04:04.860857 1869 update_attempter.cc:509] Updating boot flags...
Sep 16 05:04:06.586107 sudo[2351]: pam_unix(sudo:session): session closed for user root
Sep 16 05:04:06.609840 sshd[2350]: Connection closed by 139.178.68.195 port 49256
Sep 16 05:04:06.610927 sshd-session[2347]: pam_unix(sshd:session): session closed for user core
Sep 16 05:04:06.618422 systemd-logind[1866]: Session 7 logged out. Waiting for processes to exit.
Sep 16 05:04:06.620290 systemd[1]: sshd@6-172.31.26.160:22-139.178.68.195:49256.service: Deactivated successfully.
Sep 16 05:04:06.626378 systemd[1]: session-7.scope: Deactivated successfully.
Sep 16 05:04:06.628212 systemd[1]: session-7.scope: Consumed 5.053s CPU time, 149.7M memory peak.
Sep 16 05:04:06.634901 systemd-logind[1866]: Removed session 7.
Sep 16 05:04:11.554539 kubelet[3288]: I0916 05:04:11.553322 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-sfj2w" podStartSLOduration=12.966489514 podStartE2EDuration="16.55329873s" podCreationTimestamp="2025-09-16 05:03:55 +0000 UTC" firstStartedPulling="2025-09-16 05:03:55.644871352 +0000 UTC m=+5.724208794" lastFinishedPulling="2025-09-16 05:03:59.231680565 +0000 UTC m=+9.311018010" observedRunningTime="2025-09-16 05:04:00.136854886 +0000 UTC m=+10.216192341" watchObservedRunningTime="2025-09-16 05:04:11.55329873 +0000 UTC m=+21.632636262"
Sep 16 05:04:11.565388 systemd[1]: Created slice kubepods-besteffort-pod31fd13cf_def5_40de_809d_9ffe4834bb69.slice - libcontainer container kubepods-besteffort-pod31fd13cf_def5_40de_809d_9ffe4834bb69.slice.
Sep 16 05:04:11.708822 kubelet[3288]: I0916 05:04:11.708632 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31fd13cf-def5-40de-809d-9ffe4834bb69-tigera-ca-bundle\") pod \"calico-typha-6648df47b8-fbgxr\" (UID: \"31fd13cf-def5-40de-809d-9ffe4834bb69\") " pod="calico-system/calico-typha-6648df47b8-fbgxr"
Sep 16 05:04:11.708822 kubelet[3288]: I0916 05:04:11.708695 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/31fd13cf-def5-40de-809d-9ffe4834bb69-typha-certs\") pod \"calico-typha-6648df47b8-fbgxr\" (UID: \"31fd13cf-def5-40de-809d-9ffe4834bb69\") " pod="calico-system/calico-typha-6648df47b8-fbgxr"
Sep 16 05:04:11.708822 kubelet[3288]: I0916 05:04:11.708722 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5smb\" (UniqueName: \"kubernetes.io/projected/31fd13cf-def5-40de-809d-9ffe4834bb69-kube-api-access-l5smb\") pod \"calico-typha-6648df47b8-fbgxr\" (UID: \"31fd13cf-def5-40de-809d-9ffe4834bb69\") " pod="calico-system/calico-typha-6648df47b8-fbgxr"
Sep 16 05:04:11.811199 systemd[1]: Created slice kubepods-besteffort-podbfd3c62b_1470_4e33_a58e_e0dab1be7d5a.slice - libcontainer container kubepods-besteffort-podbfd3c62b_1470_4e33_a58e_e0dab1be7d5a.slice.
Sep 16 05:04:11.877202 containerd[1903]: time="2025-09-16T05:04:11.877163233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6648df47b8-fbgxr,Uid:31fd13cf-def5-40de-809d-9ffe4834bb69,Namespace:calico-system,Attempt:0,}"
Sep 16 05:04:11.909937 kubelet[3288]: I0916 05:04:11.909417 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-xtables-lock\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.909937 kubelet[3288]: I0916 05:04:11.909461 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-policysync\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.909937 kubelet[3288]: I0916 05:04:11.909478 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-flexvol-driver-host\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.909937 kubelet[3288]: I0916 05:04:11.909496 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-node-certs\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.909937 kubelet[3288]: I0916 05:04:11.909511 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-var-run-calico\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.910145 kubelet[3288]: I0916 05:04:11.909525 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-cni-log-dir\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.910145 kubelet[3288]: I0916 05:04:11.909539 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-cni-bin-dir\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.910145 kubelet[3288]: I0916 05:04:11.909552 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-var-lib-calico\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.910145 kubelet[3288]: I0916 05:04:11.909568 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-tigera-ca-bundle\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.910145 kubelet[3288]: I0916 05:04:11.909585 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-cni-net-dir\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.910276 kubelet[3288]: I0916 05:04:11.909599 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrqk\" (UniqueName: \"kubernetes.io/projected/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-kube-api-access-hwrqk\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.910276 kubelet[3288]: I0916 05:04:11.909613 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfd3c62b-1470-4e33-a58e-e0dab1be7d5a-lib-modules\") pod \"calico-node-pvqsk\" (UID: \"bfd3c62b-1470-4e33-a58e-e0dab1be7d5a\") " pod="calico-system/calico-node-pvqsk"
Sep 16 05:04:11.922831 containerd[1903]: time="2025-09-16T05:04:11.921306988Z" level=info msg="connecting to shim 6e05724c85a8cf69408adb41a05d0bd64633f8c7a67fc26e5f8f4ec5bdabd5f6" address="unix:///run/containerd/s/162b62d05951a0f41d1c3ba3b8a5a1261a4863b94f02238f0dd49a67757f78a4" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:04:11.969154 systemd[1]: Started cri-containerd-6e05724c85a8cf69408adb41a05d0bd64633f8c7a67fc26e5f8f4ec5bdabd5f6.scope - libcontainer container 6e05724c85a8cf69408adb41a05d0bd64633f8c7a67fc26e5f8f4ec5bdabd5f6.
Sep 16 05:04:12.023563 kubelet[3288]: E0916 05:04:12.023532 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.023563 kubelet[3288]: W0916 05:04:12.023556 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.023700 kubelet[3288]: E0916 05:04:12.023683 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.028616 kubelet[3288]: E0916 05:04:12.028274 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.028616 kubelet[3288]: W0916 05:04:12.028293 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.028616 kubelet[3288]: E0916 05:04:12.028322 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.066099 containerd[1903]: time="2025-09-16T05:04:12.065509658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6648df47b8-fbgxr,Uid:31fd13cf-def5-40de-809d-9ffe4834bb69,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e05724c85a8cf69408adb41a05d0bd64633f8c7a67fc26e5f8f4ec5bdabd5f6\"" Sep 16 05:04:12.070046 containerd[1903]: time="2025-09-16T05:04:12.069995172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 05:04:12.118837 containerd[1903]: time="2025-09-16T05:04:12.118784197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pvqsk,Uid:bfd3c62b-1470-4e33-a58e-e0dab1be7d5a,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:12.138663 kubelet[3288]: E0916 05:04:12.138311 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dd2vk" podUID="dd82fc49-e12c-4f36-9356-cba04aba62de" Sep 16 05:04:12.160721 containerd[1903]: time="2025-09-16T05:04:12.159975369Z" level=info msg="connecting to shim 8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607" address="unix:///run/containerd/s/14d84abf407c801fe05e6ba44978863502687ed526d661442ad33f0a436edb2e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:12.190998 systemd[1]: Started cri-containerd-8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607.scope - libcontainer container 8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607. 
Sep 16 05:04:12.236512 containerd[1903]: time="2025-09-16T05:04:12.236484341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pvqsk,Uid:bfd3c62b-1470-4e33-a58e-e0dab1be7d5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607\"" Sep 16 05:04:12.315594 kubelet[3288]: E0916 05:04:12.315552 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.315594 kubelet[3288]: W0916 05:04:12.315584 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.315794 kubelet[3288]: E0916 05:04:12.315613 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.315794 kubelet[3288]: I0916 05:04:12.315654 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dd82fc49-e12c-4f36-9356-cba04aba62de-socket-dir\") pod \"csi-node-driver-dd2vk\" (UID: \"dd82fc49-e12c-4f36-9356-cba04aba62de\") " pod="calico-system/csi-node-driver-dd2vk" Sep 16 05:04:12.316728 kubelet[3288]: E0916 05:04:12.316611 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.316728 kubelet[3288]: W0916 05:04:12.316636 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.316728 kubelet[3288]: E0916 05:04:12.316674 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.317132 kubelet[3288]: I0916 05:04:12.316704 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dd82fc49-e12c-4f36-9356-cba04aba62de-varrun\") pod \"csi-node-driver-dd2vk\" (UID: \"dd82fc49-e12c-4f36-9356-cba04aba62de\") " pod="calico-system/csi-node-driver-dd2vk" Sep 16 05:04:12.320786 kubelet[3288]: E0916 05:04:12.320758 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.320786 kubelet[3288]: W0916 05:04:12.320779 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.321428 kubelet[3288]: E0916 05:04:12.321395 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.321428 kubelet[3288]: W0916 05:04:12.321417 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.321547 kubelet[3288]: E0916 05:04:12.321439 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.321547 kubelet[3288]: I0916 05:04:12.321479 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx67d\" (UniqueName: \"kubernetes.io/projected/dd82fc49-e12c-4f36-9356-cba04aba62de-kube-api-access-xx67d\") pod \"csi-node-driver-dd2vk\" (UID: \"dd82fc49-e12c-4f36-9356-cba04aba62de\") " pod="calico-system/csi-node-driver-dd2vk" Sep 16 05:04:12.321897 kubelet[3288]: E0916 05:04:12.321866 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.323923 kubelet[3288]: E0916 05:04:12.323899 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.323923 kubelet[3288]: W0916 05:04:12.323916 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.324040 kubelet[3288]: E0916 05:04:12.323951 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.324660 kubelet[3288]: I0916 05:04:12.324626 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dd82fc49-e12c-4f36-9356-cba04aba62de-registration-dir\") pod \"csi-node-driver-dd2vk\" (UID: \"dd82fc49-e12c-4f36-9356-cba04aba62de\") " pod="calico-system/csi-node-driver-dd2vk" Sep 16 05:04:12.324743 kubelet[3288]: E0916 05:04:12.324711 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.324743 kubelet[3288]: W0916 05:04:12.324723 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.324861 kubelet[3288]: E0916 05:04:12.324752 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.325305 kubelet[3288]: E0916 05:04:12.325274 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.325305 kubelet[3288]: W0916 05:04:12.325292 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.325423 kubelet[3288]: E0916 05:04:12.325312 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.327533 kubelet[3288]: E0916 05:04:12.327514 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.327533 kubelet[3288]: W0916 05:04:12.327532 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.327649 kubelet[3288]: E0916 05:04:12.327618 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.327984 kubelet[3288]: E0916 05:04:12.327878 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.327984 kubelet[3288]: W0916 05:04:12.327892 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.328765 kubelet[3288]: E0916 05:04:12.328744 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.328921 kubelet[3288]: E0916 05:04:12.328750 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.328921 kubelet[3288]: W0916 05:04:12.328778 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.329013 kubelet[3288]: E0916 05:04:12.329003 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.329056 kubelet[3288]: W0916 05:04:12.329014 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.329204 kubelet[3288]: E0916 05:04:12.329118 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.329204 kubelet[3288]: E0916 05:04:12.329140 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.329204 kubelet[3288]: I0916 05:04:12.329183 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd82fc49-e12c-4f36-9356-cba04aba62de-kubelet-dir\") pod \"csi-node-driver-dd2vk\" (UID: \"dd82fc49-e12c-4f36-9356-cba04aba62de\") " pod="calico-system/csi-node-driver-dd2vk" Sep 16 05:04:12.329379 kubelet[3288]: E0916 05:04:12.329298 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.329379 kubelet[3288]: W0916 05:04:12.329308 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.329379 kubelet[3288]: E0916 05:04:12.329321 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.329684 kubelet[3288]: E0916 05:04:12.329666 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.329982 kubelet[3288]: W0916 05:04:12.329864 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.329982 kubelet[3288]: E0916 05:04:12.329891 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.331685 kubelet[3288]: E0916 05:04:12.331662 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.332333 kubelet[3288]: W0916 05:04:12.332301 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.332333 kubelet[3288]: E0916 05:04:12.332329 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.333388 kubelet[3288]: E0916 05:04:12.333362 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.333388 kubelet[3288]: W0916 05:04:12.333378 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.333498 kubelet[3288]: E0916 05:04:12.333394 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Error: unexpected end of JSON input" Sep 16 05:04:12.436800 kubelet[3288]: E0916 05:04:12.436720 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.436800 kubelet[3288]: W0916 05:04:12.436726 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.436800 kubelet[3288]: E0916 05:04:12.436755 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.436949 kubelet[3288]: E0916 05:04:12.436888 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.436949 kubelet[3288]: W0916 05:04:12.436894 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.437007 kubelet[3288]: E0916 05:04:12.436966 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.437328 kubelet[3288]: E0916 05:04:12.437051 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.437328 kubelet[3288]: W0916 05:04:12.437061 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.437328 kubelet[3288]: E0916 05:04:12.437172 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.437328 kubelet[3288]: E0916 05:04:12.437207 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.437328 kubelet[3288]: W0916 05:04:12.437212 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.437328 kubelet[3288]: E0916 05:04:12.437228 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.437967 kubelet[3288]: E0916 05:04:12.437451 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.437967 kubelet[3288]: W0916 05:04:12.437458 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.437967 kubelet[3288]: E0916 05:04:12.437505 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.437967 kubelet[3288]: E0916 05:04:12.437675 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.437967 kubelet[3288]: W0916 05:04:12.437681 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.437967 kubelet[3288]: E0916 05:04:12.437696 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.437967 kubelet[3288]: E0916 05:04:12.437884 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.437967 kubelet[3288]: W0916 05:04:12.437906 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.437967 kubelet[3288]: E0916 05:04:12.437921 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.438192 kubelet[3288]: E0916 05:04:12.438131 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.438192 kubelet[3288]: W0916 05:04:12.438138 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.438192 kubelet[3288]: E0916 05:04:12.438147 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.438674 kubelet[3288]: E0916 05:04:12.438403 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.438674 kubelet[3288]: W0916 05:04:12.438415 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.438674 kubelet[3288]: E0916 05:04:12.438423 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.438952 kubelet[3288]: E0916 05:04:12.438799 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.438952 kubelet[3288]: W0916 05:04:12.438829 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.438952 kubelet[3288]: E0916 05:04:12.438842 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.439069 kubelet[3288]: E0916 05:04:12.439031 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.439069 kubelet[3288]: W0916 05:04:12.439038 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.439069 kubelet[3288]: E0916 05:04:12.439054 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.439372 kubelet[3288]: E0916 05:04:12.439226 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.439372 kubelet[3288]: W0916 05:04:12.439240 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.439372 kubelet[3288]: E0916 05:04:12.439255 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.439776 kubelet[3288]: E0916 05:04:12.439431 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.439776 kubelet[3288]: W0916 05:04:12.439447 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.439776 kubelet[3288]: E0916 05:04:12.439479 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.440286 kubelet[3288]: E0916 05:04:12.440025 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.440286 kubelet[3288]: W0916 05:04:12.440034 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.440402 kubelet[3288]: E0916 05:04:12.440378 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.440756 kubelet[3288]: E0916 05:04:12.440740 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.440756 kubelet[3288]: W0916 05:04:12.440753 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.440859 kubelet[3288]: E0916 05:04:12.440768 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.441008 kubelet[3288]: E0916 05:04:12.440994 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.441008 kubelet[3288]: W0916 05:04:12.441005 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.441092 kubelet[3288]: E0916 05:04:12.441078 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:12.441851 kubelet[3288]: E0916 05:04:12.441405 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.441851 kubelet[3288]: W0916 05:04:12.441416 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.441851 kubelet[3288]: E0916 05:04:12.441427 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:12.449754 kubelet[3288]: E0916 05:04:12.449728 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:12.449754 kubelet[3288]: W0916 05:04:12.449746 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:12.449919 kubelet[3288]: E0916 05:04:12.449765 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:13.340763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635850197.mount: Deactivated successfully. 
Sep 16 05:04:14.065830 kubelet[3288]: E0916 05:04:14.065767 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dd2vk" podUID="dd82fc49-e12c-4f36-9356-cba04aba62de" Sep 16 05:04:14.482458 containerd[1903]: time="2025-09-16T05:04:14.482406368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:14.484224 containerd[1903]: time="2025-09-16T05:04:14.484188095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 16 05:04:14.486630 containerd[1903]: time="2025-09-16T05:04:14.486569016Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:14.491671 containerd[1903]: time="2025-09-16T05:04:14.491048818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:14.491671 containerd[1903]: time="2025-09-16T05:04:14.491559084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.421532006s" Sep 16 05:04:14.491671 containerd[1903]: time="2025-09-16T05:04:14.491586628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 16 05:04:14.493341 containerd[1903]: time="2025-09-16T05:04:14.493302857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 16 05:04:14.510283 containerd[1903]: time="2025-09-16T05:04:14.510166110Z" level=info msg="CreateContainer within sandbox \"6e05724c85a8cf69408adb41a05d0bd64633f8c7a67fc26e5f8f4ec5bdabd5f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 16 05:04:14.527972 containerd[1903]: time="2025-09-16T05:04:14.527921001Z" level=info msg="Container a12c8ac2cfe66ab28b1dc1857aac8671a2d1673bbb9d7767c72be3f5abbebcd1: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:14.533699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367538.mount: Deactivated successfully. Sep 16 05:04:14.547873 containerd[1903]: time="2025-09-16T05:04:14.547829865Z" level=info msg="CreateContainer within sandbox \"6e05724c85a8cf69408adb41a05d0bd64633f8c7a67fc26e5f8f4ec5bdabd5f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a12c8ac2cfe66ab28b1dc1857aac8671a2d1673bbb9d7767c72be3f5abbebcd1\"" Sep 16 05:04:14.548445 containerd[1903]: time="2025-09-16T05:04:14.548417340Z" level=info msg="StartContainer for \"a12c8ac2cfe66ab28b1dc1857aac8671a2d1673bbb9d7767c72be3f5abbebcd1\"" Sep 16 05:04:14.549439 containerd[1903]: time="2025-09-16T05:04:14.549377268Z" level=info msg="connecting to shim a12c8ac2cfe66ab28b1dc1857aac8671a2d1673bbb9d7767c72be3f5abbebcd1" address="unix:///run/containerd/s/162b62d05951a0f41d1c3ba3b8a5a1261a4863b94f02238f0dd49a67757f78a4" protocol=ttrpc version=3 Sep 16 05:04:14.573004 systemd[1]: Started cri-containerd-a12c8ac2cfe66ab28b1dc1857aac8671a2d1673bbb9d7767c72be3f5abbebcd1.scope - libcontainer container a12c8ac2cfe66ab28b1dc1857aac8671a2d1673bbb9d7767c72be3f5abbebcd1. 
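The kubelet startup-latency entries in this log stamp each event with a Go monotonic suffix such as `m=+22.148569743`; subtracting two such offsets gives the elapsed window between events independent of wall-clock adjustments. A sketch, using the `firstStartedPulling`/`lastFinishedPulling` values copied from the calico-typha startup-latency entry in this log:

```python
import re

# Extract the monotonic offset ("m=+<seconds>") that Go appends to
# kubelet-formatted timestamps, then measure elapsed time between
# two log events by subtracting offsets.
def mono_offset(ts: str) -> float:
    m = re.search(r"m=\+([0-9.]+)", ts)
    if m is None:
        raise ValueError(f"no monotonic offset in {ts!r}")
    return float(m.group(1))

# Values copied from the pod startup-latency entry for calico-typha.
first_pull = "2025-09-16 05:04:12.069232309 +0000 UTC m=+22.148569743"
last_pull = "2025-09-16 05:04:14.492608572 +0000 UTC m=+24.571946006"

window = mono_offset(last_pull) - mono_offset(first_pull)
print(f"pull window: {window:.9f}s")
```

The resulting window (about 2.423s) is slightly wider than the 2.421532006s that containerd itself reports for the typha pull, since the tracker and containerd bracket the pull at different points.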
Sep 16 05:04:14.631895 containerd[1903]: time="2025-09-16T05:04:14.631857687Z" level=info msg="StartContainer for \"a12c8ac2cfe66ab28b1dc1857aac8671a2d1673bbb9d7767c72be3f5abbebcd1\" returns successfully" Sep 16 05:04:15.183477 kubelet[3288]: I0916 05:04:15.183420 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6648df47b8-fbgxr" podStartSLOduration=1.7600204179999999 podStartE2EDuration="4.183396681s" podCreationTimestamp="2025-09-16 05:04:11 +0000 UTC" firstStartedPulling="2025-09-16 05:04:12.069232309 +0000 UTC m=+22.148569743" lastFinishedPulling="2025-09-16 05:04:14.492608572 +0000 UTC m=+24.571946006" observedRunningTime="2025-09-16 05:04:15.183080377 +0000 UTC m=+25.262417832" watchObservedRunningTime="2025-09-16 05:04:15.183396681 +0000 UTC m=+25.262734138" Sep 16 05:04:15.266020 kubelet[3288]: E0916 05:04:15.265972 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.266020 kubelet[3288]: W0916 05:04:15.266008 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.266277 kubelet[3288]: E0916 05:04:15.266034 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:15.363372 kubelet[3288]: E0916 05:04:15.363351 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.363372 kubelet[3288]: W0916 05:04:15.363365 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.363456 kubelet[3288]: E0916 05:04:15.363381 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:15.363549 kubelet[3288]: E0916 05:04:15.363535 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.363549 kubelet[3288]: W0916 05:04:15.363545 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.363605 kubelet[3288]: E0916 05:04:15.363552 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:15.363735 kubelet[3288]: E0916 05:04:15.363715 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.363735 kubelet[3288]: W0916 05:04:15.363729 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.363839 kubelet[3288]: E0916 05:04:15.363745 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:15.363957 kubelet[3288]: E0916 05:04:15.363941 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.363957 kubelet[3288]: W0916 05:04:15.363953 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.364118 kubelet[3288]: E0916 05:04:15.363962 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:15.364420 kubelet[3288]: E0916 05:04:15.364187 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.364420 kubelet[3288]: W0916 05:04:15.364205 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.364420 kubelet[3288]: E0916 05:04:15.364225 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:15.364515 kubelet[3288]: E0916 05:04:15.364481 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.364515 kubelet[3288]: W0916 05:04:15.364490 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.364567 kubelet[3288]: E0916 05:04:15.364513 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:15.364732 kubelet[3288]: E0916 05:04:15.364718 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.364732 kubelet[3288]: W0916 05:04:15.364731 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.364802 kubelet[3288]: E0916 05:04:15.364749 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:15.365104 kubelet[3288]: E0916 05:04:15.365091 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.365104 kubelet[3288]: W0916 05:04:15.365102 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.365176 kubelet[3288]: E0916 05:04:15.365113 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:15.365367 kubelet[3288]: E0916 05:04:15.365285 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.365367 kubelet[3288]: W0916 05:04:15.365294 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.365367 kubelet[3288]: E0916 05:04:15.365302 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:15.365858 kubelet[3288]: E0916 05:04:15.365836 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.365858 kubelet[3288]: W0916 05:04:15.365853 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.365938 kubelet[3288]: E0916 05:04:15.365868 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:04:15.366090 kubelet[3288]: E0916 05:04:15.366041 3288 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:04:15.366090 kubelet[3288]: W0916 05:04:15.366054 3288 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:04:15.366090 kubelet[3288]: E0916 05:04:15.366066 3288 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:04:15.722339 containerd[1903]: time="2025-09-16T05:04:15.722156919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.725675 containerd[1903]: time="2025-09-16T05:04:15.725631834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 16 05:04:15.730262 containerd[1903]: time="2025-09-16T05:04:15.730117192Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.735231 containerd[1903]: time="2025-09-16T05:04:15.735146771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.735971 containerd[1903]: time="2025-09-16T05:04:15.735918533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.242416247s" Sep 16 05:04:15.735971 containerd[1903]: time="2025-09-16T05:04:15.735956274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 16 05:04:15.738799 containerd[1903]: time="2025-09-16T05:04:15.738765899Z" level=info msg="CreateContainer within sandbox \"8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 16 05:04:15.765115 containerd[1903]: time="2025-09-16T05:04:15.765076607Z" level=info msg="Container f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:15.779720 containerd[1903]: time="2025-09-16T05:04:15.779672535Z" level=info msg="CreateContainer within sandbox \"8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79\"" Sep 16 05:04:15.780364 containerd[1903]: time="2025-09-16T05:04:15.780337178Z" level=info msg="StartContainer for \"f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79\"" Sep 16 05:04:15.781682 containerd[1903]: time="2025-09-16T05:04:15.781648079Z" level=info msg="connecting to shim f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79" address="unix:///run/containerd/s/14d84abf407c801fe05e6ba44978863502687ed526d661442ad33f0a436edb2e" protocol=ttrpc version=3 Sep 16 05:04:15.805042 systemd[1]: Started cri-containerd-f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79.scope - libcontainer container f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79. Sep 16 05:04:15.857288 containerd[1903]: time="2025-09-16T05:04:15.857244955Z" level=info msg="StartContainer for \"f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79\" returns successfully" Sep 16 05:04:15.867479 systemd[1]: cri-containerd-f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79.scope: Deactivated successfully. 
Sep 16 05:04:15.884974 containerd[1903]: time="2025-09-16T05:04:15.884927157Z" level=info msg="received exit event container_id:\"f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79\" id:\"f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79\" pid:4233 exited_at:{seconds:1757999055 nanos:874707742}" Sep 16 05:04:15.902037 containerd[1903]: time="2025-09-16T05:04:15.901979885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79\" id:\"f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79\" pid:4233 exited_at:{seconds:1757999055 nanos:874707742}" Sep 16 05:04:15.934165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4a3c1c875b0dbc304651a4cb32e63c46e3836a3fa984632af0ca7e0333c0b79-rootfs.mount: Deactivated successfully. Sep 16 05:04:16.073243 kubelet[3288]: E0916 05:04:16.073124 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dd2vk" podUID="dd82fc49-e12c-4f36-9356-cba04aba62de" Sep 16 05:04:16.169392 kubelet[3288]: I0916 05:04:16.169087 3288 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 05:04:16.170328 containerd[1903]: time="2025-09-16T05:04:16.170162128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 16 05:04:18.065661 kubelet[3288]: E0916 05:04:18.065439 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dd2vk" podUID="dd82fc49-e12c-4f36-9356-cba04aba62de" Sep 16 05:04:19.162436 containerd[1903]: time="2025-09-16T05:04:19.162379402Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:19.163497 containerd[1903]: time="2025-09-16T05:04:19.163349616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 16 05:04:19.164366 containerd[1903]: time="2025-09-16T05:04:19.164331424Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:19.167748 containerd[1903]: time="2025-09-16T05:04:19.167714955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:19.170516 containerd[1903]: time="2025-09-16T05:04:19.170014409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.999182536s" Sep 16 05:04:19.170516 containerd[1903]: time="2025-09-16T05:04:19.170052182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 16 05:04:19.173988 containerd[1903]: time="2025-09-16T05:04:19.173961155Z" level=info msg="CreateContainer within sandbox \"8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 05:04:19.198782 containerd[1903]: time="2025-09-16T05:04:19.198744255Z" level=info msg="Container 06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf: CDI devices from CRI 
Config.CDIDevices: []" Sep 16 05:04:19.210523 containerd[1903]: time="2025-09-16T05:04:19.210479470Z" level=info msg="CreateContainer within sandbox \"8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf\"" Sep 16 05:04:19.211894 containerd[1903]: time="2025-09-16T05:04:19.211146008Z" level=info msg="StartContainer for \"06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf\"" Sep 16 05:04:19.213800 containerd[1903]: time="2025-09-16T05:04:19.213762401Z" level=info msg="connecting to shim 06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf" address="unix:///run/containerd/s/14d84abf407c801fe05e6ba44978863502687ed526d661442ad33f0a436edb2e" protocol=ttrpc version=3 Sep 16 05:04:19.244036 systemd[1]: Started cri-containerd-06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf.scope - libcontainer container 06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf. Sep 16 05:04:19.290597 containerd[1903]: time="2025-09-16T05:04:19.290530034Z" level=info msg="StartContainer for \"06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf\" returns successfully" Sep 16 05:04:20.066665 kubelet[3288]: E0916 05:04:20.066611 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dd2vk" podUID="dd82fc49-e12c-4f36-9356-cba04aba62de" Sep 16 05:04:20.459707 systemd[1]: cri-containerd-06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf.scope: Deactivated successfully. 
Sep 16 05:04:20.460141 systemd[1]: cri-containerd-06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf.scope: Consumed 582ms CPU time, 166.8M memory peak, 10.1M read from disk, 171.3M written to disk. Sep 16 05:04:20.463022 containerd[1903]: time="2025-09-16T05:04:20.462884626Z" level=info msg="received exit event container_id:\"06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf\" id:\"06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf\" pid:4291 exited_at:{seconds:1757999060 nanos:461378317}" Sep 16 05:04:20.463449 containerd[1903]: time="2025-09-16T05:04:20.463401801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf\" id:\"06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf\" pid:4291 exited_at:{seconds:1757999060 nanos:461378317}" Sep 16 05:04:20.566135 kubelet[3288]: I0916 05:04:20.565755 3288 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 16 05:04:20.584183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06d6ac63d937346c89aa4ce6b9dd0e85e135a93a37e9c68ac860545f4b20cdaf-rootfs.mount: Deactivated successfully. Sep 16 05:04:20.699409 systemd[1]: Created slice kubepods-besteffort-pod369f054e_4d5b_4c67_87e2_df1787272db4.slice - libcontainer container kubepods-besteffort-pod369f054e_4d5b_4c67_87e2_df1787272db4.slice. Sep 16 05:04:20.709417 systemd[1]: Created slice kubepods-burstable-poda33e2693_caa3_4c5a_931a_c092b2ac0ca3.slice - libcontainer container kubepods-burstable-poda33e2693_caa3_4c5a_931a_c092b2ac0ca3.slice. Sep 16 05:04:20.721291 systemd[1]: Created slice kubepods-besteffort-podbff306de_bd7e_4453_aba3_b35487bc8935.slice - libcontainer container kubepods-besteffort-podbff306de_bd7e_4453_aba3_b35487bc8935.slice. 
Sep 16 05:04:20.730277 systemd[1]: Created slice kubepods-besteffort-pod86d4db9f_1b1a_4dc0_afd7_9b9364edc17e.slice - libcontainer container kubepods-besteffort-pod86d4db9f_1b1a_4dc0_afd7_9b9364edc17e.slice. Sep 16 05:04:20.742186 systemd[1]: Created slice kubepods-burstable-pod97844c74_d2ee_4414_82c7_5b2cecb1de63.slice - libcontainer container kubepods-burstable-pod97844c74_d2ee_4414_82c7_5b2cecb1de63.slice. Sep 16 05:04:20.749676 systemd[1]: Created slice kubepods-besteffort-pod833a7020_2682_4481_acbc_82870da6246d.slice - libcontainer container kubepods-besteffort-pod833a7020_2682_4481_acbc_82870da6246d.slice. Sep 16 05:04:20.759213 systemd[1]: Created slice kubepods-besteffort-poddc3cc30b_6a97_4224_a3b0_28026a64f7bd.slice - libcontainer container kubepods-besteffort-poddc3cc30b_6a97_4224_a3b0_28026a64f7bd.slice. Sep 16 05:04:20.800355 kubelet[3288]: I0916 05:04:20.800308 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/86d4db9f-1b1a-4dc0-afd7-9b9364edc17e-calico-apiserver-certs\") pod \"calico-apiserver-d6f88447f-252dq\" (UID: \"86d4db9f-1b1a-4dc0-afd7-9b9364edc17e\") " pod="calico-apiserver/calico-apiserver-d6f88447f-252dq" Sep 16 05:04:20.800355 kubelet[3288]: I0916 05:04:20.800362 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lfrv\" (UniqueName: \"kubernetes.io/projected/97844c74-d2ee-4414-82c7-5b2cecb1de63-kube-api-access-4lfrv\") pod \"coredns-7c65d6cfc9-hscsv\" (UID: \"97844c74-d2ee-4414-82c7-5b2cecb1de63\") " pod="kube-system/coredns-7c65d6cfc9-hscsv" Sep 16 05:04:20.800611 kubelet[3288]: I0916 05:04:20.800388 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dc3cc30b-6a97-4224-a3b0-28026a64f7bd-goldmane-key-pair\") pod \"goldmane-7988f88666-zg8mj\" (UID: 
\"dc3cc30b-6a97-4224-a3b0-28026a64f7bd\") " pod="calico-system/goldmane-7988f88666-zg8mj" Sep 16 05:04:20.800611 kubelet[3288]: I0916 05:04:20.800413 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvhg\" (UniqueName: \"kubernetes.io/projected/369f054e-4d5b-4c67-87e2-df1787272db4-kube-api-access-6vvhg\") pod \"calico-kube-controllers-5d8d6cbbb-tqsgv\" (UID: \"369f054e-4d5b-4c67-87e2-df1787272db4\") " pod="calico-system/calico-kube-controllers-5d8d6cbbb-tqsgv" Sep 16 05:04:20.800611 kubelet[3288]: I0916 05:04:20.800439 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a33e2693-caa3-4c5a-931a-c092b2ac0ca3-config-volume\") pod \"coredns-7c65d6cfc9-p7rrw\" (UID: \"a33e2693-caa3-4c5a-931a-c092b2ac0ca3\") " pod="kube-system/coredns-7c65d6cfc9-p7rrw" Sep 16 05:04:20.800611 kubelet[3288]: I0916 05:04:20.800585 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72n9t\" (UniqueName: \"kubernetes.io/projected/a33e2693-caa3-4c5a-931a-c092b2ac0ca3-kube-api-access-72n9t\") pod \"coredns-7c65d6cfc9-p7rrw\" (UID: \"a33e2693-caa3-4c5a-931a-c092b2ac0ca3\") " pod="kube-system/coredns-7c65d6cfc9-p7rrw" Sep 16 05:04:20.800885 kubelet[3288]: I0916 05:04:20.800614 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97844c74-d2ee-4414-82c7-5b2cecb1de63-config-volume\") pod \"coredns-7c65d6cfc9-hscsv\" (UID: \"97844c74-d2ee-4414-82c7-5b2cecb1de63\") " pod="kube-system/coredns-7c65d6cfc9-hscsv" Sep 16 05:04:20.800885 kubelet[3288]: I0916 05:04:20.800677 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqhrf\" (UniqueName: 
\"kubernetes.io/projected/86d4db9f-1b1a-4dc0-afd7-9b9364edc17e-kube-api-access-hqhrf\") pod \"calico-apiserver-d6f88447f-252dq\" (UID: \"86d4db9f-1b1a-4dc0-afd7-9b9364edc17e\") " pod="calico-apiserver/calico-apiserver-d6f88447f-252dq" Sep 16 05:04:20.800885 kubelet[3288]: I0916 05:04:20.800736 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bff306de-bd7e-4453-aba3-b35487bc8935-whisker-backend-key-pair\") pod \"whisker-55bff48fc7-nbmlz\" (UID: \"bff306de-bd7e-4453-aba3-b35487bc8935\") " pod="calico-system/whisker-55bff48fc7-nbmlz" Sep 16 05:04:20.800885 kubelet[3288]: I0916 05:04:20.800767 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc3cc30b-6a97-4224-a3b0-28026a64f7bd-config\") pod \"goldmane-7988f88666-zg8mj\" (UID: \"dc3cc30b-6a97-4224-a3b0-28026a64f7bd\") " pod="calico-system/goldmane-7988f88666-zg8mj" Sep 16 05:04:20.801060 kubelet[3288]: I0916 05:04:20.800869 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hwkp\" (UniqueName: \"kubernetes.io/projected/833a7020-2682-4481-acbc-82870da6246d-kube-api-access-9hwkp\") pod \"calico-apiserver-d6f88447f-fwsnq\" (UID: \"833a7020-2682-4481-acbc-82870da6246d\") " pod="calico-apiserver/calico-apiserver-d6f88447f-fwsnq" Sep 16 05:04:20.801060 kubelet[3288]: I0916 05:04:20.800933 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc3cc30b-6a97-4224-a3b0-28026a64f7bd-goldmane-ca-bundle\") pod \"goldmane-7988f88666-zg8mj\" (UID: \"dc3cc30b-6a97-4224-a3b0-28026a64f7bd\") " pod="calico-system/goldmane-7988f88666-zg8mj" Sep 16 05:04:20.801060 kubelet[3288]: I0916 05:04:20.800997 3288 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff306de-bd7e-4453-aba3-b35487bc8935-whisker-ca-bundle\") pod \"whisker-55bff48fc7-nbmlz\" (UID: \"bff306de-bd7e-4453-aba3-b35487bc8935\") " pod="calico-system/whisker-55bff48fc7-nbmlz" Sep 16 05:04:20.801196 kubelet[3288]: I0916 05:04:20.801028 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmdxq\" (UniqueName: \"kubernetes.io/projected/bff306de-bd7e-4453-aba3-b35487bc8935-kube-api-access-mmdxq\") pod \"whisker-55bff48fc7-nbmlz\" (UID: \"bff306de-bd7e-4453-aba3-b35487bc8935\") " pod="calico-system/whisker-55bff48fc7-nbmlz" Sep 16 05:04:20.801196 kubelet[3288]: I0916 05:04:20.801103 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/833a7020-2682-4481-acbc-82870da6246d-calico-apiserver-certs\") pod \"calico-apiserver-d6f88447f-fwsnq\" (UID: \"833a7020-2682-4481-acbc-82870da6246d\") " pod="calico-apiserver/calico-apiserver-d6f88447f-fwsnq" Sep 16 05:04:20.801196 kubelet[3288]: I0916 05:04:20.801171 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hm7\" (UniqueName: \"kubernetes.io/projected/dc3cc30b-6a97-4224-a3b0-28026a64f7bd-kube-api-access-n4hm7\") pod \"goldmane-7988f88666-zg8mj\" (UID: \"dc3cc30b-6a97-4224-a3b0-28026a64f7bd\") " pod="calico-system/goldmane-7988f88666-zg8mj" Sep 16 05:04:20.801324 kubelet[3288]: I0916 05:04:20.801196 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/369f054e-4d5b-4c67-87e2-df1787272db4-tigera-ca-bundle\") pod \"calico-kube-controllers-5d8d6cbbb-tqsgv\" (UID: \"369f054e-4d5b-4c67-87e2-df1787272db4\") " 
pod="calico-system/calico-kube-controllers-5d8d6cbbb-tqsgv" Sep 16 05:04:21.004235 containerd[1903]: time="2025-09-16T05:04:21.004101825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8d6cbbb-tqsgv,Uid:369f054e-4d5b-4c67-87e2-df1787272db4,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:21.018678 containerd[1903]: time="2025-09-16T05:04:21.018622071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p7rrw,Uid:a33e2693-caa3-4c5a-931a-c092b2ac0ca3,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:21.036688 containerd[1903]: time="2025-09-16T05:04:21.036073795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6f88447f-252dq,Uid:86d4db9f-1b1a-4dc0-afd7-9b9364edc17e,Namespace:calico-apiserver,Attempt:0,}" Sep 16 05:04:21.050968 containerd[1903]: time="2025-09-16T05:04:21.050922758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bff48fc7-nbmlz,Uid:bff306de-bd7e-4453-aba3-b35487bc8935,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:21.055388 containerd[1903]: time="2025-09-16T05:04:21.055340576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6f88447f-fwsnq,Uid:833a7020-2682-4481-acbc-82870da6246d,Namespace:calico-apiserver,Attempt:0,}" Sep 16 05:04:21.055563 containerd[1903]: time="2025-09-16T05:04:21.055538451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hscsv,Uid:97844c74-d2ee-4414-82c7-5b2cecb1de63,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:21.078540 containerd[1903]: time="2025-09-16T05:04:21.078456425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zg8mj,Uid:dc3cc30b-6a97-4224-a3b0-28026a64f7bd,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:21.245556 containerd[1903]: time="2025-09-16T05:04:21.245516741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 05:04:21.476529 containerd[1903]: 
time="2025-09-16T05:04:21.476466926Z" level=error msg="Failed to destroy network for sandbox \"a547a2e0d4c3fc93ebcb0036e35e18d180266d1dd313d8200ee0f594b168d685\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.483531 containerd[1903]: time="2025-09-16T05:04:21.483234324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p7rrw,Uid:a33e2693-caa3-4c5a-931a-c092b2ac0ca3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a547a2e0d4c3fc93ebcb0036e35e18d180266d1dd313d8200ee0f594b168d685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.485386 kubelet[3288]: E0916 05:04:21.484900 3288 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a547a2e0d4c3fc93ebcb0036e35e18d180266d1dd313d8200ee0f594b168d685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.485386 kubelet[3288]: E0916 05:04:21.484992 3288 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a547a2e0d4c3fc93ebcb0036e35e18d180266d1dd313d8200ee0f594b168d685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p7rrw" Sep 16 05:04:21.485386 kubelet[3288]: E0916 05:04:21.485022 3288 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a547a2e0d4c3fc93ebcb0036e35e18d180266d1dd313d8200ee0f594b168d685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p7rrw" Sep 16 05:04:21.486089 kubelet[3288]: E0916 05:04:21.485091 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-p7rrw_kube-system(a33e2693-caa3-4c5a-931a-c092b2ac0ca3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-p7rrw_kube-system(a33e2693-caa3-4c5a-931a-c092b2ac0ca3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a547a2e0d4c3fc93ebcb0036e35e18d180266d1dd313d8200ee0f594b168d685\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p7rrw" podUID="a33e2693-caa3-4c5a-931a-c092b2ac0ca3" Sep 16 05:04:21.490355 containerd[1903]: time="2025-09-16T05:04:21.490308855Z" level=error msg="Failed to destroy network for sandbox \"f254591e04803cafd02de925728157f8c9f1c280997b5d3bd3c6ebd2fd2580a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.495322 containerd[1903]: time="2025-09-16T05:04:21.494550273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zg8mj,Uid:dc3cc30b-6a97-4224-a3b0-28026a64f7bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f254591e04803cafd02de925728157f8c9f1c280997b5d3bd3c6ebd2fd2580a1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.515962 kubelet[3288]: E0916 05:04:21.515911 3288 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f254591e04803cafd02de925728157f8c9f1c280997b5d3bd3c6ebd2fd2580a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.516139 kubelet[3288]: E0916 05:04:21.515995 3288 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f254591e04803cafd02de925728157f8c9f1c280997b5d3bd3c6ebd2fd2580a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zg8mj" Sep 16 05:04:21.516139 kubelet[3288]: E0916 05:04:21.516054 3288 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f254591e04803cafd02de925728157f8c9f1c280997b5d3bd3c6ebd2fd2580a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zg8mj" Sep 16 05:04:21.516551 kubelet[3288]: E0916 05:04:21.516130 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-zg8mj_calico-system(dc3cc30b-6a97-4224-a3b0-28026a64f7bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-zg8mj_calico-system(dc3cc30b-6a97-4224-a3b0-28026a64f7bd)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"f254591e04803cafd02de925728157f8c9f1c280997b5d3bd3c6ebd2fd2580a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zg8mj" podUID="dc3cc30b-6a97-4224-a3b0-28026a64f7bd" Sep 16 05:04:21.525565 containerd[1903]: time="2025-09-16T05:04:21.525515185Z" level=error msg="Failed to destroy network for sandbox \"b31968ffcedbed5fc56c588924c1131d8c66b900001e15e0ee5e471bad84be41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.528968 containerd[1903]: time="2025-09-16T05:04:21.528913556Z" level=error msg="Failed to destroy network for sandbox \"e1336a5af16f27240e72b8c7ab2c0a2552dc2dd69d97056d02e00f0d01e5c408\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.529279 containerd[1903]: time="2025-09-16T05:04:21.528924150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6f88447f-fwsnq,Uid:833a7020-2682-4481-acbc-82870da6246d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b31968ffcedbed5fc56c588924c1131d8c66b900001e15e0ee5e471bad84be41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.529916 kubelet[3288]: E0916 05:04:21.529479 3288 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b31968ffcedbed5fc56c588924c1131d8c66b900001e15e0ee5e471bad84be41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.530021 kubelet[3288]: E0916 05:04:21.529980 3288 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b31968ffcedbed5fc56c588924c1131d8c66b900001e15e0ee5e471bad84be41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6f88447f-fwsnq" Sep 16 05:04:21.530021 kubelet[3288]: E0916 05:04:21.530012 3288 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b31968ffcedbed5fc56c588924c1131d8c66b900001e15e0ee5e471bad84be41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6f88447f-fwsnq" Sep 16 05:04:21.530112 kubelet[3288]: E0916 05:04:21.530064 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d6f88447f-fwsnq_calico-apiserver(833a7020-2682-4481-acbc-82870da6246d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d6f88447f-fwsnq_calico-apiserver(833a7020-2682-4481-acbc-82870da6246d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b31968ffcedbed5fc56c588924c1131d8c66b900001e15e0ee5e471bad84be41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-d6f88447f-fwsnq" podUID="833a7020-2682-4481-acbc-82870da6246d" Sep 16 05:04:21.536466 containerd[1903]: time="2025-09-16T05:04:21.536400578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hscsv,Uid:97844c74-d2ee-4414-82c7-5b2cecb1de63,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1336a5af16f27240e72b8c7ab2c0a2552dc2dd69d97056d02e00f0d01e5c408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.536675 kubelet[3288]: E0916 05:04:21.536627 3288 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1336a5af16f27240e72b8c7ab2c0a2552dc2dd69d97056d02e00f0d01e5c408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.536777 kubelet[3288]: E0916 05:04:21.536688 3288 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1336a5af16f27240e72b8c7ab2c0a2552dc2dd69d97056d02e00f0d01e5c408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hscsv" Sep 16 05:04:21.536777 kubelet[3288]: E0916 05:04:21.536714 3288 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1336a5af16f27240e72b8c7ab2c0a2552dc2dd69d97056d02e00f0d01e5c408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hscsv" Sep 16 05:04:21.536922 kubelet[3288]: E0916 05:04:21.536767 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hscsv_kube-system(97844c74-d2ee-4414-82c7-5b2cecb1de63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hscsv_kube-system(97844c74-d2ee-4414-82c7-5b2cecb1de63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1336a5af16f27240e72b8c7ab2c0a2552dc2dd69d97056d02e00f0d01e5c408\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hscsv" podUID="97844c74-d2ee-4414-82c7-5b2cecb1de63" Sep 16 05:04:21.545146 containerd[1903]: time="2025-09-16T05:04:21.545032068Z" level=error msg="Failed to destroy network for sandbox \"dd0c83f68dd8c873060b15a1a3b3fef7ab418ed454c2c1870150a4f099e128bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.547301 containerd[1903]: time="2025-09-16T05:04:21.547012178Z" level=error msg="Failed to destroy network for sandbox \"5f85b3e6d9cf6b8e7d29171efdf8719cfada8516f66f3ca9bf93cc79aaa9095d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.547578 containerd[1903]: time="2025-09-16T05:04:21.547539690Z" level=error msg="Failed to destroy network for sandbox \"12041ee1d322f5d7704381b62f4f729084e4efa66f1f7eab048945425b7513e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.549162 containerd[1903]: time="2025-09-16T05:04:21.549121811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6f88447f-252dq,Uid:86d4db9f-1b1a-4dc0-afd7-9b9364edc17e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0c83f68dd8c873060b15a1a3b3fef7ab418ed454c2c1870150a4f099e128bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.549533 kubelet[3288]: E0916 05:04:21.549495 3288 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0c83f68dd8c873060b15a1a3b3fef7ab418ed454c2c1870150a4f099e128bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.549627 kubelet[3288]: E0916 05:04:21.549553 3288 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0c83f68dd8c873060b15a1a3b3fef7ab418ed454c2c1870150a4f099e128bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6f88447f-252dq" Sep 16 05:04:21.549627 kubelet[3288]: E0916 05:04:21.549606 3288 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0c83f68dd8c873060b15a1a3b3fef7ab418ed454c2c1870150a4f099e128bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6f88447f-252dq" Sep 16 05:04:21.549718 kubelet[3288]: E0916 05:04:21.549665 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d6f88447f-252dq_calico-apiserver(86d4db9f-1b1a-4dc0-afd7-9b9364edc17e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d6f88447f-252dq_calico-apiserver(86d4db9f-1b1a-4dc0-afd7-9b9364edc17e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd0c83f68dd8c873060b15a1a3b3fef7ab418ed454c2c1870150a4f099e128bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6f88447f-252dq" podUID="86d4db9f-1b1a-4dc0-afd7-9b9364edc17e" Sep 16 05:04:21.551048 containerd[1903]: time="2025-09-16T05:04:21.550981096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8d6cbbb-tqsgv,Uid:369f054e-4d5b-4c67-87e2-df1787272db4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f85b3e6d9cf6b8e7d29171efdf8719cfada8516f66f3ca9bf93cc79aaa9095d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.551400 kubelet[3288]: E0916 05:04:21.551335 3288 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f85b3e6d9cf6b8e7d29171efdf8719cfada8516f66f3ca9bf93cc79aaa9095d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 
16 05:04:21.551480 kubelet[3288]: E0916 05:04:21.551418 3288 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f85b3e6d9cf6b8e7d29171efdf8719cfada8516f66f3ca9bf93cc79aaa9095d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d8d6cbbb-tqsgv" Sep 16 05:04:21.551480 kubelet[3288]: E0916 05:04:21.551451 3288 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f85b3e6d9cf6b8e7d29171efdf8719cfada8516f66f3ca9bf93cc79aaa9095d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d8d6cbbb-tqsgv" Sep 16 05:04:21.551570 kubelet[3288]: E0916 05:04:21.551506 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d8d6cbbb-tqsgv_calico-system(369f054e-4d5b-4c67-87e2-df1787272db4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d8d6cbbb-tqsgv_calico-system(369f054e-4d5b-4c67-87e2-df1787272db4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f85b3e6d9cf6b8e7d29171efdf8719cfada8516f66f3ca9bf93cc79aaa9095d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d8d6cbbb-tqsgv" podUID="369f054e-4d5b-4c67-87e2-df1787272db4" Sep 16 05:04:21.553325 containerd[1903]: time="2025-09-16T05:04:21.553246871Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-55bff48fc7-nbmlz,Uid:bff306de-bd7e-4453-aba3-b35487bc8935,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12041ee1d322f5d7704381b62f4f729084e4efa66f1f7eab048945425b7513e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.554257 kubelet[3288]: E0916 05:04:21.553478 3288 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12041ee1d322f5d7704381b62f4f729084e4efa66f1f7eab048945425b7513e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:21.554257 kubelet[3288]: E0916 05:04:21.553519 3288 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12041ee1d322f5d7704381b62f4f729084e4efa66f1f7eab048945425b7513e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55bff48fc7-nbmlz" Sep 16 05:04:21.554257 kubelet[3288]: E0916 05:04:21.553535 3288 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12041ee1d322f5d7704381b62f4f729084e4efa66f1f7eab048945425b7513e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55bff48fc7-nbmlz" Sep 16 05:04:21.554382 kubelet[3288]: E0916 05:04:21.553567 3288 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55bff48fc7-nbmlz_calico-system(bff306de-bd7e-4453-aba3-b35487bc8935)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55bff48fc7-nbmlz_calico-system(bff306de-bd7e-4453-aba3-b35487bc8935)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12041ee1d322f5d7704381b62f4f729084e4efa66f1f7eab048945425b7513e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55bff48fc7-nbmlz" podUID="bff306de-bd7e-4453-aba3-b35487bc8935" Sep 16 05:04:22.070191 systemd[1]: Created slice kubepods-besteffort-poddd82fc49_e12c_4f36_9356_cba04aba62de.slice - libcontainer container kubepods-besteffort-poddd82fc49_e12c_4f36_9356_cba04aba62de.slice. Sep 16 05:04:22.072899 containerd[1903]: time="2025-09-16T05:04:22.072624427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dd2vk,Uid:dd82fc49-e12c-4f36-9356-cba04aba62de,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:22.127686 containerd[1903]: time="2025-09-16T05:04:22.127641210Z" level=error msg="Failed to destroy network for sandbox \"16034ed8c770521776b1dc18347b2ac08fabf83f9f66f75d22f70ab553074f18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:22.130360 systemd[1]: run-netns-cni\x2de3e24773\x2d2dd7\x2d7a5c\x2d16e0\x2d45c126290a77.mount: Deactivated successfully. 
Sep 16 05:04:22.134282 containerd[1903]: time="2025-09-16T05:04:22.133947989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dd2vk,Uid:dd82fc49-e12c-4f36-9356-cba04aba62de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16034ed8c770521776b1dc18347b2ac08fabf83f9f66f75d22f70ab553074f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:22.134763 kubelet[3288]: E0916 05:04:22.134717 3288 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16034ed8c770521776b1dc18347b2ac08fabf83f9f66f75d22f70ab553074f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:22.134895 kubelet[3288]: E0916 05:04:22.134797 3288 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16034ed8c770521776b1dc18347b2ac08fabf83f9f66f75d22f70ab553074f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dd2vk" Sep 16 05:04:22.134895 kubelet[3288]: E0916 05:04:22.134854 3288 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16034ed8c770521776b1dc18347b2ac08fabf83f9f66f75d22f70ab553074f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dd2vk" 
Sep 16 05:04:22.134999 kubelet[3288]: E0916 05:04:22.134940 3288 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dd2vk_calico-system(dd82fc49-e12c-4f36-9356-cba04aba62de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dd2vk_calico-system(dd82fc49-e12c-4f36-9356-cba04aba62de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16034ed8c770521776b1dc18347b2ac08fabf83f9f66f75d22f70ab553074f18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dd2vk" podUID="dd82fc49-e12c-4f36-9356-cba04aba62de" Sep 16 05:04:27.786125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677085775.mount: Deactivated successfully. Sep 16 05:04:27.849177 containerd[1903]: time="2025-09-16T05:04:27.849127070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:27.857575 containerd[1903]: time="2025-09-16T05:04:27.857432625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 16 05:04:27.858319 containerd[1903]: time="2025-09-16T05:04:27.858272552Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:27.863687 containerd[1903]: time="2025-09-16T05:04:27.863640059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:27.866674 containerd[1903]: time="2025-09-16T05:04:27.866612771Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.618413417s" Sep 16 05:04:27.866674 containerd[1903]: time="2025-09-16T05:04:27.866663489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 16 05:04:27.895569 containerd[1903]: time="2025-09-16T05:04:27.895527684Z" level=info msg="CreateContainer within sandbox \"8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 05:04:27.944440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755428126.mount: Deactivated successfully. Sep 16 05:04:27.944850 containerd[1903]: time="2025-09-16T05:04:27.944783245Z" level=info msg="Container 35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:28.010587 containerd[1903]: time="2025-09-16T05:04:28.010530158Z" level=info msg="CreateContainer within sandbox \"8ed68935c1e1e9766e3710a622cd39799c4ae1a9f9248146ce1f9ffb682e1607\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3\"" Sep 16 05:04:28.011265 containerd[1903]: time="2025-09-16T05:04:28.011222731Z" level=info msg="StartContainer for \"35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3\"" Sep 16 05:04:28.017119 containerd[1903]: time="2025-09-16T05:04:28.017054245Z" level=info msg="connecting to shim 35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3" address="unix:///run/containerd/s/14d84abf407c801fe05e6ba44978863502687ed526d661442ad33f0a436edb2e" protocol=ttrpc 
version=3 Sep 16 05:04:28.128210 systemd[1]: Started cri-containerd-35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3.scope - libcontainer container 35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3. Sep 16 05:04:28.205516 containerd[1903]: time="2025-09-16T05:04:28.205398112Z" level=info msg="StartContainer for \"35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3\" returns successfully" Sep 16 05:04:28.345430 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 05:04:28.346878 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 16 05:04:28.745170 containerd[1903]: time="2025-09-16T05:04:28.745129017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3\" id:\"bd209538e8398f32271c9371366108f14bcb7d509550b8839744713a91e0d27c\" pid:4621 exit_status:1 exited_at:{seconds:1757999068 nanos:744255390}" Sep 16 05:04:28.805286 kubelet[3288]: I0916 05:04:28.801896 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pvqsk" podStartSLOduration=2.174338782 podStartE2EDuration="17.801870549s" podCreationTimestamp="2025-09-16 05:04:11 +0000 UTC" firstStartedPulling="2025-09-16 05:04:12.239865263 +0000 UTC m=+22.319202697" lastFinishedPulling="2025-09-16 05:04:27.86739703 +0000 UTC m=+37.946734464" observedRunningTime="2025-09-16 05:04:28.394821951 +0000 UTC m=+38.474159419" watchObservedRunningTime="2025-09-16 05:04:28.801870549 +0000 UTC m=+38.881208103" Sep 16 05:04:28.978833 kubelet[3288]: I0916 05:04:28.978666 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff306de-bd7e-4453-aba3-b35487bc8935-whisker-ca-bundle\") pod \"bff306de-bd7e-4453-aba3-b35487bc8935\" (UID: \"bff306de-bd7e-4453-aba3-b35487bc8935\") " Sep 16 05:04:28.978833 
kubelet[3288]: I0916 05:04:28.978709 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bff306de-bd7e-4453-aba3-b35487bc8935-whisker-backend-key-pair\") pod \"bff306de-bd7e-4453-aba3-b35487bc8935\" (UID: \"bff306de-bd7e-4453-aba3-b35487bc8935\") " Sep 16 05:04:28.978833 kubelet[3288]: I0916 05:04:28.978734 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmdxq\" (UniqueName: \"kubernetes.io/projected/bff306de-bd7e-4453-aba3-b35487bc8935-kube-api-access-mmdxq\") pod \"bff306de-bd7e-4453-aba3-b35487bc8935\" (UID: \"bff306de-bd7e-4453-aba3-b35487bc8935\") " Sep 16 05:04:28.986830 kubelet[3288]: I0916 05:04:28.986693 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff306de-bd7e-4453-aba3-b35487bc8935-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bff306de-bd7e-4453-aba3-b35487bc8935" (UID: "bff306de-bd7e-4453-aba3-b35487bc8935"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 05:04:29.000951 kubelet[3288]: I0916 05:04:29.000129 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff306de-bd7e-4453-aba3-b35487bc8935-kube-api-access-mmdxq" (OuterVolumeSpecName: "kube-api-access-mmdxq") pod "bff306de-bd7e-4453-aba3-b35487bc8935" (UID: "bff306de-bd7e-4453-aba3-b35487bc8935"). InnerVolumeSpecName "kube-api-access-mmdxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 05:04:29.000951 kubelet[3288]: I0916 05:04:29.000270 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff306de-bd7e-4453-aba3-b35487bc8935-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bff306de-bd7e-4453-aba3-b35487bc8935" (UID: "bff306de-bd7e-4453-aba3-b35487bc8935"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 16 05:04:29.002213 systemd[1]: var-lib-kubelet-pods-bff306de\x2dbd7e\x2d4453\x2daba3\x2db35487bc8935-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmdxq.mount: Deactivated successfully. Sep 16 05:04:29.002368 systemd[1]: var-lib-kubelet-pods-bff306de\x2dbd7e\x2d4453\x2daba3\x2db35487bc8935-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 16 05:04:29.079329 kubelet[3288]: I0916 05:04:29.079259 3288 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bff306de-bd7e-4453-aba3-b35487bc8935-whisker-backend-key-pair\") on node \"ip-172-31-26-160\" DevicePath \"\"" Sep 16 05:04:29.079329 kubelet[3288]: I0916 05:04:29.079301 3288 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmdxq\" (UniqueName: \"kubernetes.io/projected/bff306de-bd7e-4453-aba3-b35487bc8935-kube-api-access-mmdxq\") on node \"ip-172-31-26-160\" DevicePath \"\"" Sep 16 05:04:29.079329 kubelet[3288]: I0916 05:04:29.079313 3288 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff306de-bd7e-4453-aba3-b35487bc8935-whisker-ca-bundle\") on node \"ip-172-31-26-160\" DevicePath \"\"" Sep 16 05:04:29.362537 systemd[1]: Removed slice kubepods-besteffort-podbff306de_bd7e_4453_aba3_b35487bc8935.slice - libcontainer container kubepods-besteffort-podbff306de_bd7e_4453_aba3_b35487bc8935.slice. Sep 16 05:04:29.530022 systemd[1]: Created slice kubepods-besteffort-poddef70af2_e278_45d1_b5be_7ae8177eeaa4.slice - libcontainer container kubepods-besteffort-poddef70af2_e278_45d1_b5be_7ae8177eeaa4.slice. 
Sep 16 05:04:29.536027 containerd[1903]: time="2025-09-16T05:04:29.535982613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3\" id:\"5e38b4d48d0955300c6eb8c2bfa91c4c0b2f8f6cf4cbab25d368e84d4cd12e7e\" pid:4665 exit_status:1 exited_at:{seconds:1757999069 nanos:535198595}" Sep 16 05:04:29.683171 kubelet[3288]: I0916 05:04:29.683027 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6xr\" (UniqueName: \"kubernetes.io/projected/def70af2-e278-45d1-b5be-7ae8177eeaa4-kube-api-access-gc6xr\") pod \"whisker-54c87c7764-899xl\" (UID: \"def70af2-e278-45d1-b5be-7ae8177eeaa4\") " pod="calico-system/whisker-54c87c7764-899xl" Sep 16 05:04:29.683171 kubelet[3288]: I0916 05:04:29.683076 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/def70af2-e278-45d1-b5be-7ae8177eeaa4-whisker-backend-key-pair\") pod \"whisker-54c87c7764-899xl\" (UID: \"def70af2-e278-45d1-b5be-7ae8177eeaa4\") " pod="calico-system/whisker-54c87c7764-899xl" Sep 16 05:04:29.683171 kubelet[3288]: I0916 05:04:29.683101 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/def70af2-e278-45d1-b5be-7ae8177eeaa4-whisker-ca-bundle\") pod \"whisker-54c87c7764-899xl\" (UID: \"def70af2-e278-45d1-b5be-7ae8177eeaa4\") " pod="calico-system/whisker-54c87c7764-899xl" Sep 16 05:04:29.835202 containerd[1903]: time="2025-09-16T05:04:29.835131875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c87c7764-899xl,Uid:def70af2-e278-45d1-b5be-7ae8177eeaa4,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:30.070334 kubelet[3288]: I0916 05:04:30.069606 3288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bff306de-bd7e-4453-aba3-b35487bc8935" path="/var/lib/kubelet/pods/bff306de-bd7e-4453-aba3-b35487bc8935/volumes" Sep 16 05:04:30.406770 (udev-worker)[4602]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:04:30.413268 systemd-networkd[1814]: cali160db943eba: Link UP Sep 16 05:04:30.414645 systemd-networkd[1814]: cali160db943eba: Gained carrier Sep 16 05:04:30.450109 containerd[1903]: 2025-09-16 05:04:29.876 [INFO][4679] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 05:04:30.450109 containerd[1903]: 2025-09-16 05:04:29.922 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0 whisker-54c87c7764- calico-system def70af2-e278-45d1-b5be-7ae8177eeaa4 873 0 2025-09-16 05:04:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54c87c7764 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-160 whisker-54c87c7764-899xl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali160db943eba [] [] }} ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Namespace="calico-system" Pod="whisker-54c87c7764-899xl" WorkloadEndpoint="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-" Sep 16 05:04:30.450109 containerd[1903]: 2025-09-16 05:04:29.922 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Namespace="calico-system" Pod="whisker-54c87c7764-899xl" WorkloadEndpoint="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" Sep 16 05:04:30.450109 containerd[1903]: 2025-09-16 05:04:30.282 [INFO][4691] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" HandleID="k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Workload="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.287 [INFO][4691] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" HandleID="k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Workload="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e750), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-160", "pod":"whisker-54c87c7764-899xl", "timestamp":"2025-09-16 05:04:30.282279219 +0000 UTC"}, Hostname:"ip-172-31-26-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.288 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.289 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.290 [INFO][4691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-160' Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.325 [INFO][4691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" host="ip-172-31-26-160" Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.343 [INFO][4691] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-160" Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.353 [INFO][4691] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.358 [INFO][4691] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:30.451026 containerd[1903]: 2025-09-16 05:04:30.361 [INFO][4691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:30.451392 containerd[1903]: 2025-09-16 05:04:30.361 [INFO][4691] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" host="ip-172-31-26-160" Sep 16 05:04:30.451392 containerd[1903]: 2025-09-16 05:04:30.364 [INFO][4691] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd Sep 16 05:04:30.451392 containerd[1903]: 2025-09-16 05:04:30.373 [INFO][4691] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" host="ip-172-31-26-160" Sep 16 05:04:30.451392 containerd[1903]: 2025-09-16 05:04:30.385 [INFO][4691] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.193/26] block=192.168.23.192/26 
handle="k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" host="ip-172-31-26-160" Sep 16 05:04:30.451392 containerd[1903]: 2025-09-16 05:04:30.385 [INFO][4691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.193/26] handle="k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" host="ip-172-31-26-160" Sep 16 05:04:30.451392 containerd[1903]: 2025-09-16 05:04:30.385 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:30.451392 containerd[1903]: 2025-09-16 05:04:30.385 [INFO][4691] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.193/26] IPv6=[] ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" HandleID="k8s-pod-network.e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Workload="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" Sep 16 05:04:30.451666 containerd[1903]: 2025-09-16 05:04:30.391 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Namespace="calico-system" Pod="whisker-54c87c7764-899xl" WorkloadEndpoint="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0", GenerateName:"whisker-54c87c7764-", Namespace:"calico-system", SelfLink:"", UID:"def70af2-e278-45d1-b5be-7ae8177eeaa4", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54c87c7764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"", Pod:"whisker-54c87c7764-899xl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali160db943eba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:30.451666 containerd[1903]: 2025-09-16 05:04:30.392 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.193/32] ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Namespace="calico-system" Pod="whisker-54c87c7764-899xl" WorkloadEndpoint="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" Sep 16 05:04:30.451802 containerd[1903]: 2025-09-16 05:04:30.392 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali160db943eba ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Namespace="calico-system" Pod="whisker-54c87c7764-899xl" WorkloadEndpoint="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" Sep 16 05:04:30.451802 containerd[1903]: 2025-09-16 05:04:30.417 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Namespace="calico-system" Pod="whisker-54c87c7764-899xl" WorkloadEndpoint="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" Sep 16 05:04:30.453393 containerd[1903]: 2025-09-16 05:04:30.418 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Namespace="calico-system" Pod="whisker-54c87c7764-899xl" WorkloadEndpoint="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0", GenerateName:"whisker-54c87c7764-", Namespace:"calico-system", SelfLink:"", UID:"def70af2-e278-45d1-b5be-7ae8177eeaa4", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54c87c7764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd", Pod:"whisker-54c87c7764-899xl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali160db943eba", MAC:"c6:d1:d2:5a:78:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:30.453490 containerd[1903]: 2025-09-16 05:04:30.437 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" Namespace="calico-system" Pod="whisker-54c87c7764-899xl" 
WorkloadEndpoint="ip--172--31--26--160-k8s-whisker--54c87c7764--899xl-eth0" Sep 16 05:04:30.695673 containerd[1903]: time="2025-09-16T05:04:30.695469030Z" level=info msg="connecting to shim e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd" address="unix:///run/containerd/s/e41932a25d50dc5c316c8a32b4901c33afd65cee72d8d37ab561044c6888f79c" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:30.733045 systemd[1]: Started cri-containerd-e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd.scope - libcontainer container e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd. Sep 16 05:04:30.824318 containerd[1903]: time="2025-09-16T05:04:30.824269388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c87c7764-899xl,Uid:def70af2-e278-45d1-b5be-7ae8177eeaa4,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd\"" Sep 16 05:04:30.825721 containerd[1903]: time="2025-09-16T05:04:30.825690689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 05:04:32.179057 containerd[1903]: time="2025-09-16T05:04:32.179004433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:32.181147 containerd[1903]: time="2025-09-16T05:04:32.181001991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 16 05:04:32.183297 containerd[1903]: time="2025-09-16T05:04:32.183260278Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:32.186483 containerd[1903]: time="2025-09-16T05:04:32.186428760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:32.187220 containerd[1903]: time="2025-09-16T05:04:32.187173174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.361284364s" Sep 16 05:04:32.187220 containerd[1903]: time="2025-09-16T05:04:32.187206215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 16 05:04:32.190445 containerd[1903]: time="2025-09-16T05:04:32.189470183Z" level=info msg="CreateContainer within sandbox \"e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 16 05:04:32.205746 containerd[1903]: time="2025-09-16T05:04:32.205713314Z" level=info msg="Container 7c2ee1473b4700cd659d59884017b3c6b5133141edd888c41cf616eb42d2f70f: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:32.221309 containerd[1903]: time="2025-09-16T05:04:32.221268409Z" level=info msg="CreateContainer within sandbox \"e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7c2ee1473b4700cd659d59884017b3c6b5133141edd888c41cf616eb42d2f70f\"" Sep 16 05:04:32.222249 containerd[1903]: time="2025-09-16T05:04:32.222213553Z" level=info msg="StartContainer for \"7c2ee1473b4700cd659d59884017b3c6b5133141edd888c41cf616eb42d2f70f\"" Sep 16 05:04:32.224557 containerd[1903]: time="2025-09-16T05:04:32.224516193Z" level=info msg="connecting to shim 7c2ee1473b4700cd659d59884017b3c6b5133141edd888c41cf616eb42d2f70f" 
address="unix:///run/containerd/s/e41932a25d50dc5c316c8a32b4901c33afd65cee72d8d37ab561044c6888f79c" protocol=ttrpc version=3 Sep 16 05:04:32.253046 systemd[1]: Started cri-containerd-7c2ee1473b4700cd659d59884017b3c6b5133141edd888c41cf616eb42d2f70f.scope - libcontainer container 7c2ee1473b4700cd659d59884017b3c6b5133141edd888c41cf616eb42d2f70f. Sep 16 05:04:32.307289 containerd[1903]: time="2025-09-16T05:04:32.307250517Z" level=info msg="StartContainer for \"7c2ee1473b4700cd659d59884017b3c6b5133141edd888c41cf616eb42d2f70f\" returns successfully" Sep 16 05:04:32.308718 containerd[1903]: time="2025-09-16T05:04:32.308672661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 05:04:32.421998 systemd-networkd[1814]: cali160db943eba: Gained IPv6LL Sep 16 05:04:33.065797 containerd[1903]: time="2025-09-16T05:04:33.065526374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6f88447f-fwsnq,Uid:833a7020-2682-4481-acbc-82870da6246d,Namespace:calico-apiserver,Attempt:0,}" Sep 16 05:04:33.200126 systemd-networkd[1814]: calif33b9155f12: Link UP Sep 16 05:04:33.201770 systemd-networkd[1814]: calif33b9155f12: Gained carrier Sep 16 05:04:33.227976 containerd[1903]: 2025-09-16 05:04:33.101 [INFO][4925] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 05:04:33.227976 containerd[1903]: 2025-09-16 05:04:33.113 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0 calico-apiserver-d6f88447f- calico-apiserver 833a7020-2682-4481-acbc-82870da6246d 799 0 2025-09-16 05:04:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d6f88447f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-160 
calico-apiserver-d6f88447f-fwsnq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif33b9155f12 [] [] }} ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-fwsnq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-" Sep 16 05:04:33.227976 containerd[1903]: 2025-09-16 05:04:33.113 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-fwsnq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" Sep 16 05:04:33.227976 containerd[1903]: 2025-09-16 05:04:33.144 [INFO][4938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" HandleID="k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Workload="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.145 [INFO][4938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" HandleID="k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Workload="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-160", "pod":"calico-apiserver-d6f88447f-fwsnq", "timestamp":"2025-09-16 05:04:33.144756421 +0000 UTC"}, Hostname:"ip-172-31-26-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.145 [INFO][4938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.145 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.145 [INFO][4938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-160' Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.153 [INFO][4938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" host="ip-172-31-26-160" Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.158 [INFO][4938] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-160" Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.164 [INFO][4938] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.166 [INFO][4938] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:33.228751 containerd[1903]: 2025-09-16 05:04:33.169 [INFO][4938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:33.229483 containerd[1903]: 2025-09-16 05:04:33.169 [INFO][4938] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" host="ip-172-31-26-160" Sep 16 05:04:33.229483 containerd[1903]: 2025-09-16 05:04:33.173 [INFO][4938] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9 Sep 16 05:04:33.229483 containerd[1903]: 2025-09-16 05:04:33.179 [INFO][4938] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 
handle="k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" host="ip-172-31-26-160" Sep 16 05:04:33.229483 containerd[1903]: 2025-09-16 05:04:33.188 [INFO][4938] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.194/26] block=192.168.23.192/26 handle="k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" host="ip-172-31-26-160" Sep 16 05:04:33.229483 containerd[1903]: 2025-09-16 05:04:33.189 [INFO][4938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.194/26] handle="k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" host="ip-172-31-26-160" Sep 16 05:04:33.229483 containerd[1903]: 2025-09-16 05:04:33.189 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:33.229483 containerd[1903]: 2025-09-16 05:04:33.189 [INFO][4938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.194/26] IPv6=[] ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" HandleID="k8s-pod-network.56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Workload="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" Sep 16 05:04:33.230125 containerd[1903]: 2025-09-16 05:04:33.196 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-fwsnq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0", GenerateName:"calico-apiserver-d6f88447f-", Namespace:"calico-apiserver", SelfLink:"", UID:"833a7020-2682-4481-acbc-82870da6246d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 
7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6f88447f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"", Pod:"calico-apiserver-d6f88447f-fwsnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif33b9155f12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:33.230306 containerd[1903]: 2025-09-16 05:04:33.196 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.194/32] ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-fwsnq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" Sep 16 05:04:33.230306 containerd[1903]: 2025-09-16 05:04:33.196 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif33b9155f12 ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-fwsnq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" Sep 16 05:04:33.230306 containerd[1903]: 2025-09-16 05:04:33.201 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-fwsnq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" Sep 16 05:04:33.230539 containerd[1903]: 2025-09-16 05:04:33.203 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-fwsnq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0", GenerateName:"calico-apiserver-d6f88447f-", Namespace:"calico-apiserver", SelfLink:"", UID:"833a7020-2682-4481-acbc-82870da6246d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6f88447f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9", Pod:"calico-apiserver-d6f88447f-fwsnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif33b9155f12", MAC:"82:e9:50:d6:5c:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:33.230696 containerd[1903]: 2025-09-16 05:04:33.223 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-fwsnq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--fwsnq-eth0" Sep 16 05:04:33.268836 containerd[1903]: time="2025-09-16T05:04:33.268778020Z" level=info msg="connecting to shim 56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9" address="unix:///run/containerd/s/9197f5a129278b473fe422e6ce214560ae974eb3c6d3af72c9ee7a3cca49580e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:33.316064 systemd[1]: Started cri-containerd-56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9.scope - libcontainer container 56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9. Sep 16 05:04:33.384875 containerd[1903]: time="2025-09-16T05:04:33.384831207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6f88447f-fwsnq,Uid:833a7020-2682-4481-acbc-82870da6246d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9\"" Sep 16 05:04:34.371251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184047834.mount: Deactivated successfully. 
Sep 16 05:04:34.396458 containerd[1903]: time="2025-09-16T05:04:34.396403060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:34.398427 containerd[1903]: time="2025-09-16T05:04:34.398213790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 16 05:04:34.400426 containerd[1903]: time="2025-09-16T05:04:34.400391565Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:34.406891 containerd[1903]: time="2025-09-16T05:04:34.406845933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:34.407607 containerd[1903]: time="2025-09-16T05:04:34.407580558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.098783902s" Sep 16 05:04:34.407733 containerd[1903]: time="2025-09-16T05:04:34.407718906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 16 05:04:34.408594 containerd[1903]: time="2025-09-16T05:04:34.408572429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 05:04:34.409912 containerd[1903]: time="2025-09-16T05:04:34.409877280Z" level=info msg="CreateContainer within sandbox 
\"e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 05:04:34.424090 containerd[1903]: time="2025-09-16T05:04:34.424056575Z" level=info msg="Container ac15f70be0d60bc449840d1c1551a28dabfa8258793bcc0143383c7b62f1fd9c: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:34.431161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126267105.mount: Deactivated successfully. Sep 16 05:04:34.439914 containerd[1903]: time="2025-09-16T05:04:34.439871867Z" level=info msg="CreateContainer within sandbox \"e5f96e89b6b2409accb380e7d829a288570f00918a0a2887a64a62e6aa14c7cd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ac15f70be0d60bc449840d1c1551a28dabfa8258793bcc0143383c7b62f1fd9c\"" Sep 16 05:04:34.440643 containerd[1903]: time="2025-09-16T05:04:34.440567645Z" level=info msg="StartContainer for \"ac15f70be0d60bc449840d1c1551a28dabfa8258793bcc0143383c7b62f1fd9c\"" Sep 16 05:04:34.441799 containerd[1903]: time="2025-09-16T05:04:34.441767719Z" level=info msg="connecting to shim ac15f70be0d60bc449840d1c1551a28dabfa8258793bcc0143383c7b62f1fd9c" address="unix:///run/containerd/s/e41932a25d50dc5c316c8a32b4901c33afd65cee72d8d37ab561044c6888f79c" protocol=ttrpc version=3 Sep 16 05:04:34.460163 systemd[1]: Started cri-containerd-ac15f70be0d60bc449840d1c1551a28dabfa8258793bcc0143383c7b62f1fd9c.scope - libcontainer container ac15f70be0d60bc449840d1c1551a28dabfa8258793bcc0143383c7b62f1fd9c. 
Sep 16 05:04:34.469959 systemd-networkd[1814]: calif33b9155f12: Gained IPv6LL Sep 16 05:04:34.523926 containerd[1903]: time="2025-09-16T05:04:34.523886464Z" level=info msg="StartContainer for \"ac15f70be0d60bc449840d1c1551a28dabfa8258793bcc0143383c7b62f1fd9c\" returns successfully" Sep 16 05:04:35.065372 containerd[1903]: time="2025-09-16T05:04:35.065327245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8d6cbbb-tqsgv,Uid:369f054e-4d5b-4c67-87e2-df1787272db4,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:35.175743 kubelet[3288]: I0916 05:04:35.175704 3288 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 05:04:35.213588 systemd-networkd[1814]: cali2c229ea402f: Link UP Sep 16 05:04:35.215642 systemd-networkd[1814]: cali2c229ea402f: Gained carrier Sep 16 05:04:35.255833 containerd[1903]: 2025-09-16 05:04:35.092 [INFO][5068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 05:04:35.255833 containerd[1903]: 2025-09-16 05:04:35.104 [INFO][5068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0 calico-kube-controllers-5d8d6cbbb- calico-system 369f054e-4d5b-4c67-87e2-df1787272db4 789 0 2025-09-16 05:04:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d8d6cbbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-160 calico-kube-controllers-5d8d6cbbb-tqsgv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2c229ea402f [] [] }} ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Namespace="calico-system" Pod="calico-kube-controllers-5d8d6cbbb-tqsgv" 
WorkloadEndpoint="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-" Sep 16 05:04:35.255833 containerd[1903]: 2025-09-16 05:04:35.105 [INFO][5068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Namespace="calico-system" Pod="calico-kube-controllers-5d8d6cbbb-tqsgv" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" Sep 16 05:04:35.255833 containerd[1903]: 2025-09-16 05:04:35.148 [INFO][5081] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" HandleID="k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Workload="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.150 [INFO][5081] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" HandleID="k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Workload="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac140), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-160", "pod":"calico-kube-controllers-5d8d6cbbb-tqsgv", "timestamp":"2025-09-16 05:04:35.14876789 +0000 UTC"}, Hostname:"ip-172-31-26-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.150 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.150 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.150 [INFO][5081] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-160' Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.159 [INFO][5081] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" host="ip-172-31-26-160" Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.165 [INFO][5081] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-160" Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.172 [INFO][5081] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.175 [INFO][5081] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:35.256209 containerd[1903]: 2025-09-16 05:04:35.180 [INFO][5081] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:35.256614 containerd[1903]: 2025-09-16 05:04:35.180 [INFO][5081] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" host="ip-172-31-26-160" Sep 16 05:04:35.256614 containerd[1903]: 2025-09-16 05:04:35.182 [INFO][5081] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde Sep 16 05:04:35.256614 containerd[1903]: 2025-09-16 05:04:35.188 [INFO][5081] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" host="ip-172-31-26-160" Sep 16 05:04:35.256614 
containerd[1903]: 2025-09-16 05:04:35.200 [INFO][5081] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.195/26] block=192.168.23.192/26 handle="k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" host="ip-172-31-26-160" Sep 16 05:04:35.256614 containerd[1903]: 2025-09-16 05:04:35.200 [INFO][5081] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.195/26] handle="k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" host="ip-172-31-26-160" Sep 16 05:04:35.256614 containerd[1903]: 2025-09-16 05:04:35.200 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:35.256614 containerd[1903]: 2025-09-16 05:04:35.201 [INFO][5081] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.195/26] IPv6=[] ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" HandleID="k8s-pod-network.d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Workload="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" Sep 16 05:04:35.259004 containerd[1903]: 2025-09-16 05:04:35.204 [INFO][5068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Namespace="calico-system" Pod="calico-kube-controllers-5d8d6cbbb-tqsgv" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0", GenerateName:"calico-kube-controllers-5d8d6cbbb-", Namespace:"calico-system", SelfLink:"", UID:"369f054e-4d5b-4c67-87e2-df1787272db4", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8d6cbbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"", Pod:"calico-kube-controllers-5d8d6cbbb-tqsgv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2c229ea402f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:35.259135 containerd[1903]: 2025-09-16 05:04:35.206 [INFO][5068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.195/32] ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Namespace="calico-system" Pod="calico-kube-controllers-5d8d6cbbb-tqsgv" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" Sep 16 05:04:35.259135 containerd[1903]: 2025-09-16 05:04:35.206 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c229ea402f ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Namespace="calico-system" Pod="calico-kube-controllers-5d8d6cbbb-tqsgv" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" Sep 16 05:04:35.259135 containerd[1903]: 2025-09-16 05:04:35.221 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Namespace="calico-system" Pod="calico-kube-controllers-5d8d6cbbb-tqsgv" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" Sep 16 05:04:35.259261 containerd[1903]: 2025-09-16 05:04:35.222 [INFO][5068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Namespace="calico-system" Pod="calico-kube-controllers-5d8d6cbbb-tqsgv" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0", GenerateName:"calico-kube-controllers-5d8d6cbbb-", Namespace:"calico-system", SelfLink:"", UID:"369f054e-4d5b-4c67-87e2-df1787272db4", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8d6cbbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde", Pod:"calico-kube-controllers-5d8d6cbbb-tqsgv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2c229ea402f", MAC:"46:de:df:c9:98:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:35.260519 containerd[1903]: 2025-09-16 05:04:35.246 [INFO][5068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" Namespace="calico-system" Pod="calico-kube-controllers-5d8d6cbbb-tqsgv" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--kube--controllers--5d8d6cbbb--tqsgv-eth0" Sep 16 05:04:35.386883 containerd[1903]: time="2025-09-16T05:04:35.384932724Z" level=info msg="connecting to shim d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde" address="unix:///run/containerd/s/c4f304af61df2aeb3c8a4f093f8176e7c1085bc898973ca7f8b0f6062b63131b" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:35.484576 systemd[1]: Started cri-containerd-d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde.scope - libcontainer container d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde. 
Sep 16 05:04:35.600648 kubelet[3288]: I0916 05:04:35.599247 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54c87c7764-899xl" podStartSLOduration=3.010004884 podStartE2EDuration="6.592954203s" podCreationTimestamp="2025-09-16 05:04:29 +0000 UTC" firstStartedPulling="2025-09-16 05:04:30.825469177 +0000 UTC m=+40.904806611" lastFinishedPulling="2025-09-16 05:04:34.408418485 +0000 UTC m=+44.487755930" observedRunningTime="2025-09-16 05:04:35.592889941 +0000 UTC m=+45.672227398" watchObservedRunningTime="2025-09-16 05:04:35.592954203 +0000 UTC m=+45.672291663" Sep 16 05:04:35.939142 containerd[1903]: time="2025-09-16T05:04:35.937736887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8d6cbbb-tqsgv,Uid:369f054e-4d5b-4c67-87e2-df1787272db4,Namespace:calico-system,Attempt:0,} returns sandbox id \"d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde\"" Sep 16 05:04:36.086191 containerd[1903]: time="2025-09-16T05:04:36.086105026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zg8mj,Uid:dc3cc30b-6a97-4224-a3b0-28026a64f7bd,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:36.090126 containerd[1903]: time="2025-09-16T05:04:36.087202931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6f88447f-252dq,Uid:86d4db9f-1b1a-4dc0-afd7-9b9364edc17e,Namespace:calico-apiserver,Attempt:0,}" Sep 16 05:04:36.092168 containerd[1903]: time="2025-09-16T05:04:36.092050512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dd2vk,Uid:dd82fc49-e12c-4f36-9356-cba04aba62de,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:36.519495 systemd-networkd[1814]: cali2c229ea402f: Gained IPv6LL Sep 16 05:04:36.640193 systemd-networkd[1814]: calid6a7204b5cd: Link UP Sep 16 05:04:36.640411 systemd-networkd[1814]: calid6a7204b5cd: Gained carrier Sep 16 05:04:36.703850 containerd[1903]: 2025-09-16 05:04:36.253 [INFO][5198] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 05:04:36.703850 containerd[1903]: 2025-09-16 05:04:36.291 [INFO][5198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0 csi-node-driver- calico-system dd82fc49-e12c-4f36-9356-cba04aba62de 693 0 2025-09-16 05:04:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-160 csi-node-driver-dd2vk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid6a7204b5cd [] [] }} ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Namespace="calico-system" Pod="csi-node-driver-dd2vk" WorkloadEndpoint="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-" Sep 16 05:04:36.703850 containerd[1903]: 2025-09-16 05:04:36.291 [INFO][5198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Namespace="calico-system" Pod="csi-node-driver-dd2vk" WorkloadEndpoint="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" Sep 16 05:04:36.703850 containerd[1903]: 2025-09-16 05:04:36.495 [INFO][5214] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" HandleID="k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Workload="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.496 [INFO][5214] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" 
HandleID="k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Workload="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00062a020), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-160", "pod":"csi-node-driver-dd2vk", "timestamp":"2025-09-16 05:04:36.495937587 +0000 UTC"}, Hostname:"ip-172-31-26-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.496 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.496 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.496 [INFO][5214] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-160' Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.528 [INFO][5214] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" host="ip-172-31-26-160" Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.549 [INFO][5214] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-160" Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.560 [INFO][5214] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.565 [INFO][5214] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:36.704187 containerd[1903]: 2025-09-16 05:04:36.569 [INFO][5214] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 
05:04:36.704554 containerd[1903]: 2025-09-16 05:04:36.571 [INFO][5214] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" host="ip-172-31-26-160" Sep 16 05:04:36.704554 containerd[1903]: 2025-09-16 05:04:36.573 [INFO][5214] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020 Sep 16 05:04:36.704554 containerd[1903]: 2025-09-16 05:04:36.588 [INFO][5214] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" host="ip-172-31-26-160" Sep 16 05:04:36.704554 containerd[1903]: 2025-09-16 05:04:36.608 [INFO][5214] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.196/26] block=192.168.23.192/26 handle="k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" host="ip-172-31-26-160" Sep 16 05:04:36.704554 containerd[1903]: 2025-09-16 05:04:36.608 [INFO][5214] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.196/26] handle="k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" host="ip-172-31-26-160" Sep 16 05:04:36.704554 containerd[1903]: 2025-09-16 05:04:36.609 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 05:04:36.704554 containerd[1903]: 2025-09-16 05:04:36.610 [INFO][5214] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.196/26] IPv6=[] ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" HandleID="k8s-pod-network.0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Workload="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" Sep 16 05:04:36.706089 containerd[1903]: 2025-09-16 05:04:36.632 [INFO][5198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Namespace="calico-system" Pod="csi-node-driver-dd2vk" WorkloadEndpoint="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd82fc49-e12c-4f36-9356-cba04aba62de", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"", Pod:"csi-node-driver-dd2vk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6a7204b5cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:36.706259 containerd[1903]: 2025-09-16 05:04:36.633 [INFO][5198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.196/32] ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Namespace="calico-system" Pod="csi-node-driver-dd2vk" WorkloadEndpoint="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" Sep 16 05:04:36.706259 containerd[1903]: 2025-09-16 05:04:36.633 [INFO][5198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6a7204b5cd ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Namespace="calico-system" Pod="csi-node-driver-dd2vk" WorkloadEndpoint="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" Sep 16 05:04:36.706259 containerd[1903]: 2025-09-16 05:04:36.639 [INFO][5198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Namespace="calico-system" Pod="csi-node-driver-dd2vk" WorkloadEndpoint="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" Sep 16 05:04:36.706368 containerd[1903]: 2025-09-16 05:04:36.640 [INFO][5198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Namespace="calico-system" Pod="csi-node-driver-dd2vk" WorkloadEndpoint="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"dd82fc49-e12c-4f36-9356-cba04aba62de", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020", Pod:"csi-node-driver-dd2vk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6a7204b5cd", MAC:"06:6a:36:31:3d:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:36.706442 containerd[1903]: 2025-09-16 05:04:36.687 [INFO][5198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" Namespace="calico-system" Pod="csi-node-driver-dd2vk" WorkloadEndpoint="ip--172--31--26--160-k8s-csi--node--driver--dd2vk-eth0" Sep 16 05:04:36.757400 systemd-networkd[1814]: cali280787d9895: Link UP Sep 16 05:04:36.757642 systemd-networkd[1814]: cali280787d9895: Gained carrier Sep 16 05:04:36.831197 containerd[1903]: time="2025-09-16T05:04:36.831065383Z" level=info msg="connecting to shim 0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020" 
address="unix:///run/containerd/s/981aab110ddcb423dbc2778b0bb385cfe0d3cfdc73a404896f17e6e8dfe8fb70" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:36.835888 containerd[1903]: 2025-09-16 05:04:36.377 [INFO][5174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0 goldmane-7988f88666- calico-system dc3cc30b-6a97-4224-a3b0-28026a64f7bd 796 0 2025-09-16 05:04:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-160 goldmane-7988f88666-zg8mj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali280787d9895 [] [] }} ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Namespace="calico-system" Pod="goldmane-7988f88666-zg8mj" WorkloadEndpoint="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-" Sep 16 05:04:36.835888 containerd[1903]: 2025-09-16 05:04:36.378 [INFO][5174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Namespace="calico-system" Pod="goldmane-7988f88666-zg8mj" WorkloadEndpoint="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" Sep 16 05:04:36.835888 containerd[1903]: 2025-09-16 05:04:36.533 [INFO][5230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" HandleID="k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Workload="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.535 [INFO][5230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" 
HandleID="k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Workload="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325c20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-160", "pod":"goldmane-7988f88666-zg8mj", "timestamp":"2025-09-16 05:04:36.528723485 +0000 UTC"}, Hostname:"ip-172-31-26-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.535 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.610 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.612 [INFO][5230] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-160' Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.645 [INFO][5230] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" host="ip-172-31-26-160" Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.685 [INFO][5230] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-160" Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.695 [INFO][5230] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.700 [INFO][5230] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:36.836169 containerd[1903]: 2025-09-16 05:04:36.704 [INFO][5230] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="ip-172-31-26-160" 
Sep 16 05:04:36.836534 containerd[1903]: 2025-09-16 05:04:36.704 [INFO][5230] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" host="ip-172-31-26-160" Sep 16 05:04:36.836534 containerd[1903]: 2025-09-16 05:04:36.708 [INFO][5230] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485 Sep 16 05:04:36.836534 containerd[1903]: 2025-09-16 05:04:36.718 [INFO][5230] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" host="ip-172-31-26-160" Sep 16 05:04:36.836534 containerd[1903]: 2025-09-16 05:04:36.736 [INFO][5230] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.197/26] block=192.168.23.192/26 handle="k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" host="ip-172-31-26-160" Sep 16 05:04:36.836534 containerd[1903]: 2025-09-16 05:04:36.736 [INFO][5230] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.197/26] handle="k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" host="ip-172-31-26-160" Sep 16 05:04:36.836534 containerd[1903]: 2025-09-16 05:04:36.736 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 05:04:36.836534 containerd[1903]: 2025-09-16 05:04:36.736 [INFO][5230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.197/26] IPv6=[] ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" HandleID="k8s-pod-network.ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Workload="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" Sep 16 05:04:36.837613 containerd[1903]: 2025-09-16 05:04:36.749 [INFO][5174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Namespace="calico-system" Pod="goldmane-7988f88666-zg8mj" WorkloadEndpoint="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"dc3cc30b-6a97-4224-a3b0-28026a64f7bd", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"", Pod:"goldmane-7988f88666-zg8mj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali280787d9895", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:36.837613 containerd[1903]: 2025-09-16 05:04:36.751 [INFO][5174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.197/32] ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Namespace="calico-system" Pod="goldmane-7988f88666-zg8mj" WorkloadEndpoint="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" Sep 16 05:04:36.838057 containerd[1903]: 2025-09-16 05:04:36.751 [INFO][5174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali280787d9895 ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Namespace="calico-system" Pod="goldmane-7988f88666-zg8mj" WorkloadEndpoint="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" Sep 16 05:04:36.838057 containerd[1903]: 2025-09-16 05:04:36.756 [INFO][5174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Namespace="calico-system" Pod="goldmane-7988f88666-zg8mj" WorkloadEndpoint="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" Sep 16 05:04:36.838901 containerd[1903]: 2025-09-16 05:04:36.759 [INFO][5174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Namespace="calico-system" Pod="goldmane-7988f88666-zg8mj" WorkloadEndpoint="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"dc3cc30b-6a97-4224-a3b0-28026a64f7bd", 
ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485", Pod:"goldmane-7988f88666-zg8mj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali280787d9895", MAC:"4a:87:9b:3f:64:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:36.839033 containerd[1903]: 2025-09-16 05:04:36.804 [INFO][5174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" Namespace="calico-system" Pod="goldmane-7988f88666-zg8mj" WorkloadEndpoint="ip--172--31--26--160-k8s-goldmane--7988f88666--zg8mj-eth0" Sep 16 05:04:36.927142 systemd[1]: Started cri-containerd-0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020.scope - libcontainer container 0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020. 
Sep 16 05:04:36.937844 containerd[1903]: time="2025-09-16T05:04:36.937648757Z" level=info msg="connecting to shim ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485" address="unix:///run/containerd/s/517f5241f533e1da44b5796962b934cf78c7c3ed668be1ef53494291dcbfb0ed" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:36.956111 systemd-networkd[1814]: cali6998ca1eaae: Link UP Sep 16 05:04:36.959151 systemd-networkd[1814]: cali6998ca1eaae: Gained carrier Sep 16 05:04:37.002342 containerd[1903]: 2025-09-16 05:04:36.351 [INFO][5179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0 calico-apiserver-d6f88447f- calico-apiserver 86d4db9f-1b1a-4dc0-afd7-9b9364edc17e 797 0 2025-09-16 05:04:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d6f88447f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-160 calico-apiserver-d6f88447f-252dq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6998ca1eaae [] [] }} ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-252dq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-" Sep 16 05:04:37.002342 containerd[1903]: 2025-09-16 05:04:36.353 [INFO][5179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-252dq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" Sep 16 05:04:37.002342 containerd[1903]: 2025-09-16 05:04:36.616 [INFO][5225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" HandleID="k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Workload="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.617 [INFO][5225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" HandleID="k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Workload="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-160", "pod":"calico-apiserver-d6f88447f-252dq", "timestamp":"2025-09-16 05:04:36.616944329 +0000 UTC"}, Hostname:"ip-172-31-26-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.617 [INFO][5225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.736 [INFO][5225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.741 [INFO][5225] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-160' Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.792 [INFO][5225] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" host="ip-172-31-26-160" Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.817 [INFO][5225] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-160" Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.835 [INFO][5225] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.855 [INFO][5225] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.002952 containerd[1903]: 2025-09-16 05:04:36.862 [INFO][5225] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.003353 containerd[1903]: 2025-09-16 05:04:36.863 [INFO][5225] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" host="ip-172-31-26-160" Sep 16 05:04:37.003353 containerd[1903]: 2025-09-16 05:04:36.867 [INFO][5225] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4 Sep 16 05:04:37.003353 containerd[1903]: 2025-09-16 05:04:36.886 [INFO][5225] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" host="ip-172-31-26-160" Sep 16 05:04:37.003353 containerd[1903]: 2025-09-16 05:04:36.904 [INFO][5225] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.198/26] block=192.168.23.192/26 
handle="k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" host="ip-172-31-26-160" Sep 16 05:04:37.003353 containerd[1903]: 2025-09-16 05:04:36.905 [INFO][5225] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.198/26] handle="k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" host="ip-172-31-26-160" Sep 16 05:04:37.003353 containerd[1903]: 2025-09-16 05:04:36.905 [INFO][5225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:37.003353 containerd[1903]: 2025-09-16 05:04:36.905 [INFO][5225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.198/26] IPv6=[] ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" HandleID="k8s-pod-network.a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Workload="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" Sep 16 05:04:37.003643 containerd[1903]: 2025-09-16 05:04:36.940 [INFO][5179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-252dq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0", GenerateName:"calico-apiserver-d6f88447f-", Namespace:"calico-apiserver", SelfLink:"", UID:"86d4db9f-1b1a-4dc0-afd7-9b9364edc17e", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6f88447f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"", Pod:"calico-apiserver-d6f88447f-252dq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6998ca1eaae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:37.003764 containerd[1903]: 2025-09-16 05:04:36.943 [INFO][5179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.198/32] ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-252dq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" Sep 16 05:04:37.003764 containerd[1903]: 2025-09-16 05:04:36.947 [INFO][5179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6998ca1eaae ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-252dq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" Sep 16 05:04:37.003764 containerd[1903]: 2025-09-16 05:04:36.954 [INFO][5179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-252dq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" Sep 16 05:04:37.004960 
containerd[1903]: 2025-09-16 05:04:36.957 [INFO][5179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-252dq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0", GenerateName:"calico-apiserver-d6f88447f-", Namespace:"calico-apiserver", SelfLink:"", UID:"86d4db9f-1b1a-4dc0-afd7-9b9364edc17e", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6f88447f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4", Pod:"calico-apiserver-d6f88447f-252dq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6998ca1eaae", MAC:"d2:1a:2a:e2:7e:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:37.005114 
containerd[1903]: 2025-09-16 05:04:36.984 [INFO][5179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" Namespace="calico-apiserver" Pod="calico-apiserver-d6f88447f-252dq" WorkloadEndpoint="ip--172--31--26--160-k8s-calico--apiserver--d6f88447f--252dq-eth0" Sep 16 05:04:37.036334 systemd[1]: Started cri-containerd-ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485.scope - libcontainer container ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485. Sep 16 05:04:37.059359 containerd[1903]: time="2025-09-16T05:04:37.059271814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dd2vk,Uid:dd82fc49-e12c-4f36-9356-cba04aba62de,Namespace:calico-system,Attempt:0,} returns sandbox id \"0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020\"" Sep 16 05:04:37.068201 containerd[1903]: time="2025-09-16T05:04:37.068164264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p7rrw,Uid:a33e2693-caa3-4c5a-931a-c092b2ac0ca3,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:37.072958 containerd[1903]: time="2025-09-16T05:04:37.068381488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hscsv,Uid:97844c74-d2ee-4414-82c7-5b2cecb1de63,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:37.103761 containerd[1903]: time="2025-09-16T05:04:37.103473844Z" level=info msg="connecting to shim a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4" address="unix:///run/containerd/s/5a9872360adef92189e75b862420a246e26826155ac159f66695e3fc2040d642" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:37.233156 systemd[1]: Started cri-containerd-a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4.scope - libcontainer container a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4. 
Sep 16 05:04:37.482271 containerd[1903]: time="2025-09-16T05:04:37.482204993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zg8mj,Uid:dc3cc30b-6a97-4224-a3b0-28026a64f7bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485\"" Sep 16 05:04:37.529493 containerd[1903]: time="2025-09-16T05:04:37.526449744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6f88447f-252dq,Uid:86d4db9f-1b1a-4dc0-afd7-9b9364edc17e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4\"" Sep 16 05:04:37.619454 systemd-networkd[1814]: cali7879b69df7c: Link UP Sep 16 05:04:37.623337 systemd-networkd[1814]: cali7879b69df7c: Gained carrier Sep 16 05:04:37.669107 containerd[1903]: 2025-09-16 05:04:37.326 [INFO][5368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0 coredns-7c65d6cfc9- kube-system a33e2693-caa3-4c5a-931a-c092b2ac0ca3 794 0 2025-09-16 05:03:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-160 coredns-7c65d6cfc9-p7rrw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7879b69df7c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p7rrw" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-" Sep 16 05:04:37.669107 containerd[1903]: 2025-09-16 05:04:37.327 [INFO][5368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-p7rrw" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" Sep 16 05:04:37.669107 containerd[1903]: 2025-09-16 05:04:37.495 [INFO][5442] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" HandleID="k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Workload="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.495 [INFO][5442] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" HandleID="k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Workload="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103920), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-160", "pod":"coredns-7c65d6cfc9-p7rrw", "timestamp":"2025-09-16 05:04:37.49542588 +0000 UTC"}, Hostname:"ip-172-31-26-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.495 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.495 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.495 [INFO][5442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-160' Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.518 [INFO][5442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" host="ip-172-31-26-160" Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.540 [INFO][5442] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-160" Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.550 [INFO][5442] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.555 [INFO][5442] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.669462 containerd[1903]: 2025-09-16 05:04:37.561 [INFO][5442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.670001 containerd[1903]: 2025-09-16 05:04:37.562 [INFO][5442] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" host="ip-172-31-26-160" Sep 16 05:04:37.670001 containerd[1903]: 2025-09-16 05:04:37.566 [INFO][5442] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99 Sep 16 05:04:37.670001 containerd[1903]: 2025-09-16 05:04:37.581 [INFO][5442] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" host="ip-172-31-26-160" Sep 16 05:04:37.670001 containerd[1903]: 2025-09-16 05:04:37.598 [INFO][5442] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.199/26] block=192.168.23.192/26 
handle="k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" host="ip-172-31-26-160" Sep 16 05:04:37.670001 containerd[1903]: 2025-09-16 05:04:37.598 [INFO][5442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.199/26] handle="k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" host="ip-172-31-26-160" Sep 16 05:04:37.670001 containerd[1903]: 2025-09-16 05:04:37.598 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:37.670001 containerd[1903]: 2025-09-16 05:04:37.598 [INFO][5442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.199/26] IPv6=[] ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" HandleID="k8s-pod-network.30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Workload="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" Sep 16 05:04:37.670401 containerd[1903]: 2025-09-16 05:04:37.608 [INFO][5368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p7rrw" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a33e2693-caa3-4c5a-931a-c092b2ac0ca3", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"", Pod:"coredns-7c65d6cfc9-p7rrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7879b69df7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:37.670401 containerd[1903]: 2025-09-16 05:04:37.609 [INFO][5368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.199/32] ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p7rrw" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" Sep 16 05:04:37.670401 containerd[1903]: 2025-09-16 05:04:37.609 [INFO][5368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7879b69df7c ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p7rrw" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" Sep 16 05:04:37.670401 containerd[1903]: 2025-09-16 05:04:37.623 [INFO][5368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-p7rrw" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" Sep 16 05:04:37.670401 containerd[1903]: 2025-09-16 05:04:37.630 [INFO][5368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p7rrw" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a33e2693-caa3-4c5a-931a-c092b2ac0ca3", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99", Pod:"coredns-7c65d6cfc9-p7rrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7879b69df7c", MAC:"46:8a:8c:57:fa:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:37.670401 containerd[1903]: 2025-09-16 05:04:37.662 [INFO][5368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p7rrw" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--p7rrw-eth0" Sep 16 05:04:37.734048 systemd-networkd[1814]: calid6a7204b5cd: Gained IPv6LL Sep 16 05:04:37.766990 systemd-networkd[1814]: cali3317261ad25: Link UP Sep 16 05:04:37.776927 systemd-networkd[1814]: cali3317261ad25: Gained carrier Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.294 [INFO][5384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0 coredns-7c65d6cfc9- kube-system 97844c74-d2ee-4414-82c7-5b2cecb1de63 798 0 2025-09-16 05:03:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-160 coredns-7c65d6cfc9-hscsv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3317261ad25 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hscsv" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.296 [INFO][5384] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hscsv" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.534 [INFO][5436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" HandleID="k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Workload="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.535 [INFO][5436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" HandleID="k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Workload="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-160", "pod":"coredns-7c65d6cfc9-hscsv", "timestamp":"2025-09-16 05:04:37.534672428 +0000 UTC"}, Hostname:"ip-172-31-26-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.536 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.599 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.599 [INFO][5436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-160' Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.625 [INFO][5436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.647 [INFO][5436] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.666 [INFO][5436] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.673 [INFO][5436] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.681 [INFO][5436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.681 [INFO][5436] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.684 [INFO][5436] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.695 [INFO][5436] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.725 [INFO][5436] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.200/26] block=192.168.23.192/26 
handle="k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.725 [INFO][5436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.200/26] handle="k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" host="ip-172-31-26-160" Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.726 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:37.816905 containerd[1903]: 2025-09-16 05:04:37.726 [INFO][5436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.200/26] IPv6=[] ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" HandleID="k8s-pod-network.13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Workload="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" Sep 16 05:04:37.817536 containerd[1903]: 2025-09-16 05:04:37.741 [INFO][5384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hscsv" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"97844c74-d2ee-4414-82c7-5b2cecb1de63", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"", Pod:"coredns-7c65d6cfc9-hscsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3317261ad25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:37.817536 containerd[1903]: 2025-09-16 05:04:37.745 [INFO][5384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.200/32] ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hscsv" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" Sep 16 05:04:37.817536 containerd[1903]: 2025-09-16 05:04:37.745 [INFO][5384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3317261ad25 ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hscsv" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" Sep 16 05:04:37.817536 containerd[1903]: 2025-09-16 05:04:37.772 [INFO][5384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-hscsv" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" Sep 16 05:04:37.817536 containerd[1903]: 2025-09-16 05:04:37.773 [INFO][5384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hscsv" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"97844c74-d2ee-4414-82c7-5b2cecb1de63", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-160", ContainerID:"13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac", Pod:"coredns-7c65d6cfc9-hscsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3317261ad25", MAC:"ba:57:ab:5b:e8:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:37.817536 containerd[1903]: 2025-09-16 05:04:37.805 [INFO][5384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hscsv" WorkloadEndpoint="ip--172--31--26--160-k8s-coredns--7c65d6cfc9--hscsv-eth0" Sep 16 05:04:37.848725 containerd[1903]: time="2025-09-16T05:04:37.848648337Z" level=info msg="connecting to shim 30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99" address="unix:///run/containerd/s/c16421b2a28714e65a8baf3e57e0893883d145440802d9355fa0055cdbe72fda" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:37.909160 systemd[1]: Started cri-containerd-30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99.scope - libcontainer container 30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99. Sep 16 05:04:37.916776 containerd[1903]: time="2025-09-16T05:04:37.916739279Z" level=info msg="connecting to shim 13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac" address="unix:///run/containerd/s/150bf9d4c3d965bf01ba064d252ff3a88963c7f48cbdd3dfc23e833ddcb73338" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:38.007112 systemd[1]: Started cri-containerd-13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac.scope - libcontainer container 13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac. 
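Aside: the WorkloadEndpoint dumps above render port numbers as Go hex literals (`Port:0x35`, `Port:0x23c1`). A minimal decode, using only values taken from the log itself, recovers the familiar CoreDNS service ports:

```python
# Port values copied verbatim from the WorkloadEndpoint dumps above.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}

# Python hex literals are plain ints, so decoding is just printing them.
print({name: int(p) for name, p in ports.items()})
# {'dns': 53, 'dns-tcp': 53, 'metrics': 9153}
```

That is, 0x35 is DNS on port 53 (UDP and TCP) and 0x23c1 is the CoreDNS Prometheus metrics port 9153.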
Sep 16 05:04:38.100742 containerd[1903]: time="2025-09-16T05:04:38.100699583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p7rrw,Uid:a33e2693-caa3-4c5a-931a-c092b2ac0ca3,Namespace:kube-system,Attempt:0,} returns sandbox id \"30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99\"" Sep 16 05:04:38.144139 containerd[1903]: time="2025-09-16T05:04:38.144094388Z" level=info msg="CreateContainer within sandbox \"30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 05:04:38.183008 systemd-networkd[1814]: cali6998ca1eaae: Gained IPv6LL Sep 16 05:04:38.206699 containerd[1903]: time="2025-09-16T05:04:38.206655730Z" level=info msg="Container c6251165b9d120f4d5ce12c7b96a536b9b3fd197a2b7b010e8d7de6cd0801dd5: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:38.232711 containerd[1903]: time="2025-09-16T05:04:38.232429429Z" level=info msg="CreateContainer within sandbox \"30d09cd407a6a49425cc460e118398b5da56370f7a38f4a9d540969ec851fc99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6251165b9d120f4d5ce12c7b96a536b9b3fd197a2b7b010e8d7de6cd0801dd5\"" Sep 16 05:04:38.234405 containerd[1903]: time="2025-09-16T05:04:38.234020985Z" level=info msg="StartContainer for \"c6251165b9d120f4d5ce12c7b96a536b9b3fd197a2b7b010e8d7de6cd0801dd5\"" Sep 16 05:04:38.237343 containerd[1903]: time="2025-09-16T05:04:38.236187191Z" level=info msg="connecting to shim c6251165b9d120f4d5ce12c7b96a536b9b3fd197a2b7b010e8d7de6cd0801dd5" address="unix:///run/containerd/s/c16421b2a28714e65a8baf3e57e0893883d145440802d9355fa0055cdbe72fda" protocol=ttrpc version=3 Sep 16 05:04:38.266903 containerd[1903]: time="2025-09-16T05:04:38.266769876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hscsv,Uid:97844c74-d2ee-4414-82c7-5b2cecb1de63,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac\"" Sep 16 05:04:38.279872 containerd[1903]: time="2025-09-16T05:04:38.279565278Z" level=info msg="CreateContainer within sandbox \"13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 05:04:38.322081 systemd[1]: Started cri-containerd-c6251165b9d120f4d5ce12c7b96a536b9b3fd197a2b7b010e8d7de6cd0801dd5.scope - libcontainer container c6251165b9d120f4d5ce12c7b96a536b9b3fd197a2b7b010e8d7de6cd0801dd5. Sep 16 05:04:38.326660 containerd[1903]: time="2025-09-16T05:04:38.326621870Z" level=info msg="Container 9d2895bb7d386292522ec0a2336bee63265de3802a5b790c35fe7bcc6ad9abfd: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:38.351983 containerd[1903]: time="2025-09-16T05:04:38.351936821Z" level=info msg="CreateContainer within sandbox \"13d0ae91b6bd648ec826083d62193b798f2f5dfa4922ca3c77796d0415fa17ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d2895bb7d386292522ec0a2336bee63265de3802a5b790c35fe7bcc6ad9abfd\"" Sep 16 05:04:38.355129 containerd[1903]: time="2025-09-16T05:04:38.355094597Z" level=info msg="StartContainer for \"9d2895bb7d386292522ec0a2336bee63265de3802a5b790c35fe7bcc6ad9abfd\"" Sep 16 05:04:38.361147 containerd[1903]: time="2025-09-16T05:04:38.360886057Z" level=info msg="connecting to shim 9d2895bb7d386292522ec0a2336bee63265de3802a5b790c35fe7bcc6ad9abfd" address="unix:///run/containerd/s/150bf9d4c3d965bf01ba064d252ff3a88963c7f48cbdd3dfc23e833ddcb73338" protocol=ttrpc version=3 Sep 16 05:04:38.438254 systemd-networkd[1814]: cali280787d9895: Gained IPv6LL Sep 16 05:04:38.438281 systemd[1]: Started cri-containerd-9d2895bb7d386292522ec0a2336bee63265de3802a5b790c35fe7bcc6ad9abfd.scope - libcontainer container 9d2895bb7d386292522ec0a2336bee63265de3802a5b790c35fe7bcc6ad9abfd. 
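Aside: the IPAM trace earlier in this section claims 192.168.23.200 out of the host-affine block 192.168.23.192/26 on ip-172-31-26-160. A quick sanity check of that containment with the standard library:

```python
import ipaddress

# Block and claimed address copied from the ipam/ipam.go log lines above.
block = ipaddress.ip_network("192.168.23.192/26")
claimed = ipaddress.ip_interface("192.168.23.200/26")

print(claimed.ip in block)  # True: .200 falls inside .192-.255
print(block.num_addresses)  # 64 addresses per /26 block
```

A /26 spans 64 addresses (.192 through .255 here), which is Calico's default IPAM block size.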
Sep 16 05:04:38.490061 containerd[1903]: time="2025-09-16T05:04:38.489994751Z" level=info msg="StartContainer for \"c6251165b9d120f4d5ce12c7b96a536b9b3fd197a2b7b010e8d7de6cd0801dd5\" returns successfully" Sep 16 05:04:38.588482 containerd[1903]: time="2025-09-16T05:04:38.588070821Z" level=info msg="StartContainer for \"9d2895bb7d386292522ec0a2336bee63265de3802a5b790c35fe7bcc6ad9abfd\" returns successfully" Sep 16 05:04:38.799769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210605231.mount: Deactivated successfully. Sep 16 05:04:38.843456 systemd-networkd[1814]: vxlan.calico: Link UP Sep 16 05:04:38.843467 systemd-networkd[1814]: vxlan.calico: Gained carrier Sep 16 05:04:39.026922 (udev-worker)[4600]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:04:39.079352 systemd-networkd[1814]: cali7879b69df7c: Gained IPv6LL Sep 16 05:04:39.607435 containerd[1903]: time="2025-09-16T05:04:39.607390141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:39.610802 containerd[1903]: time="2025-09-16T05:04:39.610760766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 16 05:04:39.616322 containerd[1903]: time="2025-09-16T05:04:39.616168335Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:39.621346 kubelet[3288]: I0916 05:04:39.621279 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p7rrw" podStartSLOduration=44.621248187 podStartE2EDuration="44.621248187s" podCreationTimestamp="2025-09-16 05:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:39.61918226 +0000 
UTC m=+49.698519715" watchObservedRunningTime="2025-09-16 05:04:39.621248187 +0000 UTC m=+49.700585643" Sep 16 05:04:39.627975 containerd[1903]: time="2025-09-16T05:04:39.627915264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:39.629752 containerd[1903]: time="2025-09-16T05:04:39.629706639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.221090971s" Sep 16 05:04:39.629886 containerd[1903]: time="2025-09-16T05:04:39.629840135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 05:04:39.632167 containerd[1903]: time="2025-09-16T05:04:39.632134737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 16 05:04:39.635885 containerd[1903]: time="2025-09-16T05:04:39.634899466Z" level=info msg="CreateContainer within sandbox \"56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 05:04:39.655593 containerd[1903]: time="2025-09-16T05:04:39.653961368Z" level=info msg="Container 677585d47b0057338118d354fb42f3427a5cc049dbefa8c76d2cbdc41ac9cf08: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:39.685464 containerd[1903]: time="2025-09-16T05:04:39.685419138Z" level=info msg="CreateContainer within sandbox \"56bdfd85d90eab02042e8bdd295218e5565d3dcb0b3877cfc0a5a09ba59618c9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"677585d47b0057338118d354fb42f3427a5cc049dbefa8c76d2cbdc41ac9cf08\"" Sep 16 05:04:39.695355 containerd[1903]: time="2025-09-16T05:04:39.695294249Z" level=info msg="StartContainer for \"677585d47b0057338118d354fb42f3427a5cc049dbefa8c76d2cbdc41ac9cf08\"" Sep 16 05:04:39.697874 kubelet[3288]: I0916 05:04:39.696461 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hscsv" podStartSLOduration=44.696299894 podStartE2EDuration="44.696299894s" podCreationTimestamp="2025-09-16 05:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:39.644017832 +0000 UTC m=+49.723355284" watchObservedRunningTime="2025-09-16 05:04:39.696299894 +0000 UTC m=+49.775637348" Sep 16 05:04:39.698571 containerd[1903]: time="2025-09-16T05:04:39.698485374Z" level=info msg="connecting to shim 677585d47b0057338118d354fb42f3427a5cc049dbefa8c76d2cbdc41ac9cf08" address="unix:///run/containerd/s/9197f5a129278b473fe422e6ce214560ae974eb3c6d3af72c9ee7a3cca49580e" protocol=ttrpc version=3 Sep 16 05:04:39.782038 systemd-networkd[1814]: cali3317261ad25: Gained IPv6LL Sep 16 05:04:39.825830 systemd[1]: Started cri-containerd-677585d47b0057338118d354fb42f3427a5cc049dbefa8c76d2cbdc41ac9cf08.scope - libcontainer container 677585d47b0057338118d354fb42f3427a5cc049dbefa8c76d2cbdc41ac9cf08. Sep 16 05:04:39.879061 systemd[1]: Started sshd@7-172.31.26.160:22-139.178.68.195:32938.service - OpenSSH per-connection server daemon (139.178.68.195:32938). 
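Aside: kubelet reports podStartSLOduration=44.696299894s for coredns-7c65d6cfc9-hscsv with zero-valued pull timestamps (no image pull). The figure appears to be watchObservedRunningTime minus podCreationTimestamp; a rough re-derivation at microsecond precision:

```python
from datetime import datetime, timezone

# Timestamps taken from the kubelet log line above (truncated to microseconds).
created = datetime(2025, 9, 16, 5, 3, 55, tzinfo=timezone.utc)           # podCreationTimestamp
observed = datetime(2025, 9, 16, 5, 4, 39, 696300, tzinfo=timezone.utc)  # watchObservedRunningTime

print(round((observed - created).total_seconds(), 3))  # 44.696
```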
Sep 16 05:04:40.125832 containerd[1903]: time="2025-09-16T05:04:40.125758884Z" level=info msg="StartContainer for \"677585d47b0057338118d354fb42f3427a5cc049dbefa8c76d2cbdc41ac9cf08\" returns successfully" Sep 16 05:04:40.166051 systemd-networkd[1814]: vxlan.calico: Gained IPv6LL Sep 16 05:04:40.183871 sshd[5719]: Accepted publickey for core from 139.178.68.195 port 32938 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:04:40.190481 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:40.205576 systemd-logind[1866]: New session 8 of user core. Sep 16 05:04:40.213514 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 05:04:40.601783 kubelet[3288]: I0916 05:04:40.601603 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d6f88447f-fwsnq" podStartSLOduration=27.356771338 podStartE2EDuration="33.601578198s" podCreationTimestamp="2025-09-16 05:04:07 +0000 UTC" firstStartedPulling="2025-09-16 05:04:33.3870233 +0000 UTC m=+43.466360741" lastFinishedPulling="2025-09-16 05:04:39.631830151 +0000 UTC m=+49.711167601" observedRunningTime="2025-09-16 05:04:40.599656146 +0000 UTC m=+50.678993613" watchObservedRunningTime="2025-09-16 05:04:40.601578198 +0000 UTC m=+50.680915654" Sep 16 05:04:41.443297 sshd[5733]: Connection closed by 139.178.68.195 port 32938 Sep 16 05:04:41.443957 sshd-session[5719]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:41.451670 systemd[1]: sshd@7-172.31.26.160:22-139.178.68.195:32938.service: Deactivated successfully. Sep 16 05:04:41.456009 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 05:04:41.457280 systemd-logind[1866]: Session 8 logged out. Waiting for processes to exit. Sep 16 05:04:41.462438 systemd-logind[1866]: Removed session 8. 
Sep 16 05:04:43.082449 ntpd[2200]: Listen normally on 6 vxlan.calico 192.168.23.192:123 Sep 16 05:04:43.082505 ntpd[2200]: Listen normally on 7 cali160db943eba [fe80::ecee:eeff:feee:eeee%4]:123 Sep 16 05:04:43.082528 ntpd[2200]: Listen normally on 8 calif33b9155f12 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 16 05:04:43.082547 ntpd[2200]: Listen normally on 9 cali2c229ea402f [fe80::ecee:eeff:feee:eeee%6]:123 Sep 16 05:04:43.082565 ntpd[2200]: Listen normally on 10 calid6a7204b5cd [fe80::ecee:eeff:feee:eeee%7]:123 Sep 16 05:04:43.082584 ntpd[2200]: Listen normally on 11 cali280787d9895 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 16 05:04:43.082607 ntpd[2200]: Listen normally

on 12 cali6998ca1eaae [fe80::ecee:eeff:feee:eeee%9]:123 Sep 16 05:04:43.082627 ntpd[2200]: Listen normally on 13 cali7879b69df7c [fe80::ecee:eeff:feee:eeee%10]:123 Sep 16 05:04:43.082646 ntpd[2200]: Listen normally on 14 cali3317261ad25 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 16 05:04:43.082666 ntpd[2200]: Listen normally on 15 vxlan.calico [fe80::648a:f8ff:fe94:93a5%12]:123 Sep 16 05:04:45.478296 containerd[1903]: time="2025-09-16T05:04:45.478232492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:45.483942 containerd[1903]: time="2025-09-16T05:04:45.483830009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 16 05:04:45.600516 containerd[1903]: time="2025-09-16T05:04:45.600473366Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:45.604771 containerd[1903]: time="2025-09-16T05:04:45.604717046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:45.605632 containerd[1903]: time="2025-09-16T05:04:45.605587330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.973406785s" Sep 16 05:04:45.605632 containerd[1903]: time="2025-09-16T05:04:45.605629216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns 
image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 16 05:04:45.617878 containerd[1903]: time="2025-09-16T05:04:45.617751935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 16 05:04:45.682703 containerd[1903]: time="2025-09-16T05:04:45.681371608Z" level=info msg="CreateContainer within sandbox \"d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 16 05:04:45.840158 containerd[1903]: time="2025-09-16T05:04:45.840057705Z" level=info msg="Container bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:45.847995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997168818.mount: Deactivated successfully. Sep 16 05:04:45.955264 containerd[1903]: time="2025-09-16T05:04:45.955212291Z" level=info msg="CreateContainer within sandbox \"d58f92cb453fd7f3f99945c2546ef30da8a238deb6321321ab658b1662a2ffde\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec\"" Sep 16 05:04:45.955947 containerd[1903]: time="2025-09-16T05:04:45.955923318Z" level=info msg="StartContainer for \"bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec\"" Sep 16 05:04:45.970788 containerd[1903]: time="2025-09-16T05:04:45.970747653Z" level=info msg="connecting to shim bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec" address="unix:///run/containerd/s/c4f304af61df2aeb3c8a4f093f8176e7c1085bc898973ca7f8b0f6062b63131b" protocol=ttrpc version=3 Sep 16 05:04:46.129153 systemd[1]: Started cri-containerd-bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec.scope - libcontainer container bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec. 
Sep 16 05:04:46.222233 containerd[1903]: time="2025-09-16T05:04:46.222191150Z" level=info msg="StartContainer for \"bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec\" returns successfully" Sep 16 05:04:46.477095 systemd[1]: Started sshd@8-172.31.26.160:22-139.178.68.195:54542.service - OpenSSH per-connection server daemon (139.178.68.195:54542). Sep 16 05:04:46.621861 kubelet[3288]: I0916 05:04:46.621775 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d8d6cbbb-tqsgv" podStartSLOduration=24.945448396 podStartE2EDuration="34.621758048s" podCreationTimestamp="2025-09-16 05:04:12 +0000 UTC" firstStartedPulling="2025-09-16 05:04:35.941053632 +0000 UTC m=+46.020391080" lastFinishedPulling="2025-09-16 05:04:45.61736328 +0000 UTC m=+55.696700732" observedRunningTime="2025-09-16 05:04:46.607333898 +0000 UTC m=+56.686671353" watchObservedRunningTime="2025-09-16 05:04:46.621758048 +0000 UTC m=+56.701095502" Sep 16 05:04:46.676907 containerd[1903]: time="2025-09-16T05:04:46.676630401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec\" id:\"20f06f94280f28fa835e066b5a2feb38b460940be0463d42c57214a681573bdc\" pid:5873 exited_at:{seconds:1757999086 nanos:661598033}" Sep 16 05:04:46.720860 sshd[5857]: Accepted publickey for core from 139.178.68.195 port 54542 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:04:46.724156 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:46.732271 systemd-logind[1866]: New session 9 of user core. Sep 16 05:04:46.737013 systemd[1]: Started session-9.scope - Session 9 of User core. 
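Aside: the kubelet line above reports podStartSLOduration=24.945448396s and podStartE2EDuration=34.621758048s for calico-kube-controllers-5d8d6cbbb-tqsgv. The SLO duration appears to be the end-to-end duration with the image-pull window subtracted; a rough re-derivation from the logged timestamps (truncated to microseconds):

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # Parse the "+0000 UTC" timestamps kubelet prints, minus the zone suffix.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = ts("2025-09-16 05:04:12.000000")  # podCreationTimestamp
pull_start = ts("2025-09-16 05:04:35.941053")  # firstStartedPulling
pull_end   = ts("2025-09-16 05:04:45.617363")  # lastFinishedPulling
observed   = ts("2025-09-16 05:04:46.621758")  # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (pull_end - pull_start).total_seconds()
print(round(e2e, 3), round(slo, 3))  # 34.622 24.945
```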
Sep 16 05:04:47.154220 containerd[1903]: time="2025-09-16T05:04:47.153976886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:47.155845 containerd[1903]: time="2025-09-16T05:04:47.155773974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 16 05:04:47.158414 containerd[1903]: time="2025-09-16T05:04:47.158359578Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:47.161403 containerd[1903]: time="2025-09-16T05:04:47.161355865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:47.162651 containerd[1903]: time="2025-09-16T05:04:47.162501773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.544706175s" Sep 16 05:04:47.162651 containerd[1903]: time="2025-09-16T05:04:47.162545573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 16 05:04:47.169316 containerd[1903]: time="2025-09-16T05:04:47.169202753Z" level=info msg="CreateContainer within sandbox \"0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 16 05:04:47.202191 containerd[1903]: time="2025-09-16T05:04:47.201490754Z" level=info msg="Container 
e7747d9aa25684bdbcee39c89bb7e8b089d70eff1095d3e3032cb133fcbc7d78: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:47.209743 containerd[1903]: time="2025-09-16T05:04:47.209082830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 16 05:04:47.213018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894799702.mount: Deactivated successfully. Sep 16 05:04:47.228749 containerd[1903]: time="2025-09-16T05:04:47.228640159Z" level=info msg="CreateContainer within sandbox \"0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e7747d9aa25684bdbcee39c89bb7e8b089d70eff1095d3e3032cb133fcbc7d78\"" Sep 16 05:04:47.230185 containerd[1903]: time="2025-09-16T05:04:47.230117298Z" level=info msg="StartContainer for \"e7747d9aa25684bdbcee39c89bb7e8b089d70eff1095d3e3032cb133fcbc7d78\"" Sep 16 05:04:47.232785 containerd[1903]: time="2025-09-16T05:04:47.232717753Z" level=info msg="connecting to shim e7747d9aa25684bdbcee39c89bb7e8b089d70eff1095d3e3032cb133fcbc7d78" address="unix:///run/containerd/s/981aab110ddcb423dbc2778b0bb385cfe0d3cfdc73a404896f17e6e8dfe8fb70" protocol=ttrpc version=3 Sep 16 05:04:47.299276 systemd[1]: Started cri-containerd-e7747d9aa25684bdbcee39c89bb7e8b089d70eff1095d3e3032cb133fcbc7d78.scope - libcontainer container e7747d9aa25684bdbcee39c89bb7e8b089d70eff1095d3e3032cb133fcbc7d78. Sep 16 05:04:47.386235 containerd[1903]: time="2025-09-16T05:04:47.383921010Z" level=info msg="StartContainer for \"e7747d9aa25684bdbcee39c89bb7e8b089d70eff1095d3e3032cb133fcbc7d78\" returns successfully" Sep 16 05:04:47.696401 sshd[5883]: Connection closed by 139.178.68.195 port 54542 Sep 16 05:04:47.697558 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:47.702214 systemd[1]: sshd@8-172.31.26.160:22-139.178.68.195:54542.service: Deactivated successfully. Sep 16 05:04:47.702404 systemd-logind[1866]: Session 9 logged out. 
Waiting for processes to exit. Sep 16 05:04:47.704839 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 05:04:47.706679 systemd-logind[1866]: Removed session 9. Sep 16 05:04:51.083032 containerd[1903]: time="2025-09-16T05:04:51.082967191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec\" id:\"c9699b6bfdd71af9fca1b31bd352da1bd15a523712bc322f47892bdad8e7e010\" pid:5950 exited_at:{seconds:1757999091 nanos:82608567}" Sep 16 05:04:52.143549 containerd[1903]: time="2025-09-16T05:04:52.143478060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3\" id:\"0371bf78064ee51fe9ff3c14492b9c918f552a6dc53b6fc1a1bbd087fe94e6b5\" pid:5976 exited_at:{seconds:1757999092 nanos:141785070}" Sep 16 05:04:52.392986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147467379.mount: Deactivated successfully. Sep 16 05:04:52.736135 systemd[1]: Started sshd@9-172.31.26.160:22-139.178.68.195:37198.service - OpenSSH per-connection server daemon (139.178.68.195:37198). Sep 16 05:04:53.043437 sshd[5992]: Accepted publickey for core from 139.178.68.195 port 37198 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:04:53.047383 sshd-session[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:53.060260 systemd-logind[1866]: New session 10 of user core. Sep 16 05:04:53.064032 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 16 05:04:53.208360 containerd[1903]: time="2025-09-16T05:04:53.208304409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:53.210749 containerd[1903]: time="2025-09-16T05:04:53.210701768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 16 05:04:53.213750 containerd[1903]: time="2025-09-16T05:04:53.213444687Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:53.219007 containerd[1903]: time="2025-09-16T05:04:53.218820159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.009687475s" Sep 16 05:04:53.219007 containerd[1903]: time="2025-09-16T05:04:53.218864497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 16 05:04:53.221481 containerd[1903]: time="2025-09-16T05:04:53.221452131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 05:04:53.229591 containerd[1903]: time="2025-09-16T05:04:53.217695174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:53.241587 containerd[1903]: time="2025-09-16T05:04:53.240405013Z" level=info msg="CreateContainer within sandbox 
\"ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 16 05:04:53.268208 containerd[1903]: time="2025-09-16T05:04:53.268172653Z" level=info msg="Container 4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:53.277216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135497668.mount: Deactivated successfully. Sep 16 05:04:53.290831 containerd[1903]: time="2025-09-16T05:04:53.290779683Z" level=info msg="CreateContainer within sandbox \"ad999d552e26459847adeea7b57e4b7f27c74886fe78c120085fcdeea5a0d485\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\"" Sep 16 05:04:53.291887 containerd[1903]: time="2025-09-16T05:04:53.291541477Z" level=info msg="StartContainer for \"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\"" Sep 16 05:04:53.293719 containerd[1903]: time="2025-09-16T05:04:53.292987922Z" level=info msg="connecting to shim 4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3" address="unix:///run/containerd/s/517f5241f533e1da44b5796962b934cf78c7c3ed668be1ef53494291dcbfb0ed" protocol=ttrpc version=3 Sep 16 05:04:53.354548 systemd[1]: Started cri-containerd-4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3.scope - libcontainer container 4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3. 
Sep 16 05:04:53.482824 containerd[1903]: time="2025-09-16T05:04:53.482711202Z" level=info msg="StartContainer for \"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\" returns successfully" Sep 16 05:04:53.775416 containerd[1903]: time="2025-09-16T05:04:53.775355543Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:53.844992 containerd[1903]: time="2025-09-16T05:04:53.844939799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 16 05:04:53.850733 containerd[1903]: time="2025-09-16T05:04:53.849565459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 627.917415ms" Sep 16 05:04:53.851124 containerd[1903]: time="2025-09-16T05:04:53.850957400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 05:04:53.886085 containerd[1903]: time="2025-09-16T05:04:53.886016767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\" id:\"203eb51f470619dc2c2d8e93075a19a516a46e903870e23f3c39db7121f99828\" pid:6054 exit_status:1 exited_at:{seconds:1757999093 nanos:884565740}" Sep 16 05:04:53.955177 containerd[1903]: time="2025-09-16T05:04:53.955078602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 16 05:04:53.958282 containerd[1903]: time="2025-09-16T05:04:53.958260014Z" level=info msg="CreateContainer within sandbox 
\"a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 05:04:53.982463 containerd[1903]: time="2025-09-16T05:04:53.982418603Z" level=info msg="Container 7a08ddcfc56035e3f5d52ac3bfb7bbd43c75a8030a0166eba97a4ab29016a513: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:53.990730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656605401.mount: Deactivated successfully. Sep 16 05:04:54.005035 containerd[1903]: time="2025-09-16T05:04:54.004983181Z" level=info msg="CreateContainer within sandbox \"a4e91d7a0f5d43df1ee525aa2758143c7ab6924e9708c38e0b809f1bf86a92a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a08ddcfc56035e3f5d52ac3bfb7bbd43c75a8030a0166eba97a4ab29016a513\"" Sep 16 05:04:54.006670 containerd[1903]: time="2025-09-16T05:04:54.006635169Z" level=info msg="StartContainer for \"7a08ddcfc56035e3f5d52ac3bfb7bbd43c75a8030a0166eba97a4ab29016a513\"" Sep 16 05:04:54.009380 containerd[1903]: time="2025-09-16T05:04:54.009337564Z" level=info msg="connecting to shim 7a08ddcfc56035e3f5d52ac3bfb7bbd43c75a8030a0166eba97a4ab29016a513" address="unix:///run/containerd/s/5a9872360adef92189e75b862420a246e26826155ac159f66695e3fc2040d642" protocol=ttrpc version=3 Sep 16 05:04:54.039018 systemd[1]: Started cri-containerd-7a08ddcfc56035e3f5d52ac3bfb7bbd43c75a8030a0166eba97a4ab29016a513.scope - libcontainer container 7a08ddcfc56035e3f5d52ac3bfb7bbd43c75a8030a0166eba97a4ab29016a513. 
Sep 16 05:04:54.147010 containerd[1903]: time="2025-09-16T05:04:54.146891770Z" level=info msg="StartContainer for \"7a08ddcfc56035e3f5d52ac3bfb7bbd43c75a8030a0166eba97a4ab29016a513\" returns successfully" Sep 16 05:04:54.543633 sshd[5995]: Connection closed by 139.178.68.195 port 37198 Sep 16 05:04:54.545264 sshd-session[5992]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:54.556867 systemd[1]: sshd@9-172.31.26.160:22-139.178.68.195:37198.service: Deactivated successfully. Sep 16 05:04:54.562211 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 05:04:54.565032 systemd-logind[1866]: Session 10 logged out. Waiting for processes to exit. Sep 16 05:04:54.587643 systemd[1]: Started sshd@10-172.31.26.160:22-139.178.68.195:37212.service - OpenSSH per-connection server daemon (139.178.68.195:37212). Sep 16 05:04:54.590360 systemd-logind[1866]: Removed session 10. Sep 16 05:04:54.761474 kubelet[3288]: I0916 05:04:54.756291 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-zg8mj" podStartSLOduration=27.974958599 podStartE2EDuration="43.690860383s" podCreationTimestamp="2025-09-16 05:04:11 +0000 UTC" firstStartedPulling="2025-09-16 05:04:37.505426609 +0000 UTC m=+47.584764055" lastFinishedPulling="2025-09-16 05:04:53.221328405 +0000 UTC m=+63.300665839" observedRunningTime="2025-09-16 05:04:53.656181763 +0000 UTC m=+63.735519218" watchObservedRunningTime="2025-09-16 05:04:54.690860383 +0000 UTC m=+64.770197835" Sep 16 05:04:54.761474 kubelet[3288]: I0916 05:04:54.761274 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d6f88447f-252dq" podStartSLOduration=31.335799021 podStartE2EDuration="47.761253276s" podCreationTimestamp="2025-09-16 05:04:07 +0000 UTC" firstStartedPulling="2025-09-16 05:04:37.529389693 +0000 UTC m=+47.608727139" lastFinishedPulling="2025-09-16 05:04:53.954843957 +0000 UTC m=+64.034181394" 
observedRunningTime="2025-09-16 05:04:54.689057785 +0000 UTC m=+64.768395240" watchObservedRunningTime="2025-09-16 05:04:54.761253276 +0000 UTC m=+64.840590731" Sep 16 05:04:54.873767 sshd[6104]: Accepted publickey for core from 139.178.68.195 port 37212 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:04:54.879281 sshd-session[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:54.893977 systemd-logind[1866]: New session 11 of user core. Sep 16 05:04:54.900372 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 05:04:55.269052 containerd[1903]: time="2025-09-16T05:04:55.268976914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\" id:\"a28ad67903eb91bded17cec6460f53d46c09a35d9323a37f26118b1f6bae6a68\" pid:6121 exit_status:1 exited_at:{seconds:1757999095 nanos:145031410}" Sep 16 05:04:55.625878 sshd[6131]: Connection closed by 139.178.68.195 port 37212 Sep 16 05:04:55.627843 sshd-session[6104]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:55.642822 systemd[1]: sshd@10-172.31.26.160:22-139.178.68.195:37212.service: Deactivated successfully. Sep 16 05:04:55.653261 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 05:04:55.662166 systemd-logind[1866]: Session 11 logged out. Waiting for processes to exit. Sep 16 05:04:55.682135 systemd[1]: Started sshd@11-172.31.26.160:22-139.178.68.195:37216.service - OpenSSH per-connection server daemon (139.178.68.195:37216). Sep 16 05:04:55.696085 systemd-logind[1866]: Removed session 11. Sep 16 05:04:56.039779 sshd[6144]: Accepted publickey for core from 139.178.68.195 port 37216 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:04:56.056724 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:56.077909 systemd-logind[1866]: New session 12 of user core. 
Sep 16 05:04:56.081281 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 05:04:56.335900 containerd[1903]: time="2025-09-16T05:04:56.335242460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\" id:\"732a3c3bc1ab7f7d991bafe97dc4ac124e643c70bc22810f5c94b6c5c9bae8c0\" pid:6158 exit_status:1 exited_at:{seconds:1757999096 nanos:334879249}" Sep 16 05:04:56.787751 kubelet[3288]: I0916 05:04:56.787706 3288 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 05:04:56.968406 containerd[1903]: time="2025-09-16T05:04:56.968228085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:56.972139 containerd[1903]: time="2025-09-16T05:04:56.972053710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 16 05:04:56.975013 containerd[1903]: time="2025-09-16T05:04:56.974594495Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:57.005527 containerd[1903]: time="2025-09-16T05:04:57.003336758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:57.005527 containerd[1903]: time="2025-09-16T05:04:57.005365924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.050244398s" Sep 16 05:04:57.005527 containerd[1903]: time="2025-09-16T05:04:57.005415464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 16 05:04:57.066141 containerd[1903]: time="2025-09-16T05:04:57.064180844Z" level=info msg="CreateContainer within sandbox \"0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 16 05:04:57.160716 containerd[1903]: time="2025-09-16T05:04:57.157589165Z" level=info msg="Container e333e4625537fccf98a77539969b0c64d03b2390fb5f0d9948c917dbb8301957: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:57.165310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267748684.mount: Deactivated successfully. 
Sep 16 05:04:57.192861 containerd[1903]: time="2025-09-16T05:04:57.191440908Z" level=info msg="CreateContainer within sandbox \"0621280a56c9bee990127b5560367d011b617e9a0cf6e9754f7ddcdeff2c2020\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e333e4625537fccf98a77539969b0c64d03b2390fb5f0d9948c917dbb8301957\"" Sep 16 05:04:57.195135 containerd[1903]: time="2025-09-16T05:04:57.195094069Z" level=info msg="StartContainer for \"e333e4625537fccf98a77539969b0c64d03b2390fb5f0d9948c917dbb8301957\"" Sep 16 05:04:57.200471 containerd[1903]: time="2025-09-16T05:04:57.200287593Z" level=info msg="connecting to shim e333e4625537fccf98a77539969b0c64d03b2390fb5f0d9948c917dbb8301957" address="unix:///run/containerd/s/981aab110ddcb423dbc2778b0bb385cfe0d3cfdc73a404896f17e6e8dfe8fb70" protocol=ttrpc version=3 Sep 16 05:04:57.283390 systemd[1]: Started cri-containerd-e333e4625537fccf98a77539969b0c64d03b2390fb5f0d9948c917dbb8301957.scope - libcontainer container e333e4625537fccf98a77539969b0c64d03b2390fb5f0d9948c917dbb8301957. Sep 16 05:04:57.347911 sshd[6177]: Connection closed by 139.178.68.195 port 37216 Sep 16 05:04:57.349405 sshd-session[6144]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:57.359117 systemd[1]: sshd@11-172.31.26.160:22-139.178.68.195:37216.service: Deactivated successfully. Sep 16 05:04:57.363473 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 05:04:57.366991 systemd-logind[1866]: Session 12 logged out. Waiting for processes to exit. Sep 16 05:04:57.372863 systemd-logind[1866]: Removed session 12. 
Sep 16 05:04:57.424094 containerd[1903]: time="2025-09-16T05:04:57.424052822Z" level=info msg="StartContainer for \"e333e4625537fccf98a77539969b0c64d03b2390fb5f0d9948c917dbb8301957\" returns successfully" Sep 16 05:04:57.804797 kubelet[3288]: I0916 05:04:57.804740 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dd2vk" podStartSLOduration=25.858608712 podStartE2EDuration="45.804719639s" podCreationTimestamp="2025-09-16 05:04:12 +0000 UTC" firstStartedPulling="2025-09-16 05:04:37.063420688 +0000 UTC m=+47.142758134" lastFinishedPulling="2025-09-16 05:04:57.009531614 +0000 UTC m=+67.088869061" observedRunningTime="2025-09-16 05:04:57.799475594 +0000 UTC m=+67.878813049" watchObservedRunningTime="2025-09-16 05:04:57.804719639 +0000 UTC m=+67.884057095" Sep 16 05:04:58.371737 kubelet[3288]: I0916 05:04:58.364942 3288 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 16 05:04:58.375041 kubelet[3288]: I0916 05:04:58.375002 3288 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 16 05:05:02.384769 systemd[1]: Started sshd@12-172.31.26.160:22-139.178.68.195:57324.service - OpenSSH per-connection server daemon (139.178.68.195:57324). Sep 16 05:05:02.709324 sshd[6237]: Accepted publickey for core from 139.178.68.195 port 57324 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:02.718846 sshd-session[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:02.724545 systemd-logind[1866]: New session 13 of user core. Sep 16 05:05:02.729022 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 05:05:04.011613 sshd[6240]: Connection closed by 139.178.68.195 port 57324 Sep 16 05:05:04.012875 sshd-session[6237]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:04.021352 systemd[1]: sshd@12-172.31.26.160:22-139.178.68.195:57324.service: Deactivated successfully. Sep 16 05:05:04.024929 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 05:05:04.031077 systemd-logind[1866]: Session 13 logged out. Waiting for processes to exit. Sep 16 05:05:04.032928 systemd-logind[1866]: Removed session 13. Sep 16 05:05:04.048596 systemd[1]: Started sshd@13-172.31.26.160:22-139.178.68.195:57338.service - OpenSSH per-connection server daemon (139.178.68.195:57338). Sep 16 05:05:04.239828 sshd[6251]: Accepted publickey for core from 139.178.68.195 port 57338 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:04.242206 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:04.248876 systemd-logind[1866]: New session 14 of user core. Sep 16 05:05:04.255279 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 05:05:04.873695 sshd[6254]: Connection closed by 139.178.68.195 port 57338 Sep 16 05:05:04.875524 sshd-session[6251]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:04.879642 systemd-logind[1866]: Session 14 logged out. Waiting for processes to exit. Sep 16 05:05:04.880295 systemd[1]: sshd@13-172.31.26.160:22-139.178.68.195:57338.service: Deactivated successfully. Sep 16 05:05:04.882456 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 05:05:04.883821 systemd-logind[1866]: Removed session 14. Sep 16 05:05:04.910969 systemd[1]: Started sshd@14-172.31.26.160:22-139.178.68.195:57342.service - OpenSSH per-connection server daemon (139.178.68.195:57342). 
Sep 16 05:05:05.114370 sshd[6265]: Accepted publickey for core from 139.178.68.195 port 57342 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:05.115796 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:05.121571 systemd-logind[1866]: New session 15 of user core. Sep 16 05:05:05.126988 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 05:05:07.785742 sshd[6268]: Connection closed by 139.178.68.195 port 57342 Sep 16 05:05:07.791256 sshd-session[6265]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:07.833250 systemd[1]: sshd@14-172.31.26.160:22-139.178.68.195:57342.service: Deactivated successfully. Sep 16 05:05:07.836145 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 05:05:07.836353 systemd[1]: session-15.scope: Consumed 703ms CPU time, 72.2M memory peak. Sep 16 05:05:07.841862 systemd-logind[1866]: Session 15 logged out. Waiting for processes to exit. Sep 16 05:05:07.846448 systemd[1]: Started sshd@15-172.31.26.160:22-139.178.68.195:57350.service - OpenSSH per-connection server daemon (139.178.68.195:57350). Sep 16 05:05:07.859885 systemd-logind[1866]: Removed session 15. Sep 16 05:05:08.126599 sshd[6286]: Accepted publickey for core from 139.178.68.195 port 57350 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:08.132684 sshd-session[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:08.162576 systemd-logind[1866]: New session 16 of user core. Sep 16 05:05:08.168339 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 05:05:10.078077 sshd[6290]: Connection closed by 139.178.68.195 port 57350 Sep 16 05:05:10.092206 sshd-session[6286]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:10.119387 systemd[1]: sshd@15-172.31.26.160:22-139.178.68.195:57350.service: Deactivated successfully. 
Sep 16 05:05:10.126508 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 05:05:10.133879 systemd-logind[1866]: Session 16 logged out. Waiting for processes to exit. Sep 16 05:05:10.139518 systemd[1]: Started sshd@16-172.31.26.160:22-139.178.68.195:59860.service - OpenSSH per-connection server daemon (139.178.68.195:59860). Sep 16 05:05:10.145656 systemd-logind[1866]: Removed session 16. Sep 16 05:05:10.499184 sshd[6300]: Accepted publickey for core from 139.178.68.195 port 59860 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:10.507890 sshd-session[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:10.530897 systemd-logind[1866]: New session 17 of user core. Sep 16 05:05:10.538148 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 05:05:11.298482 kubelet[3288]: E0916 05:05:11.260148 3288 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.183s" Sep 16 05:05:11.803825 sshd[6303]: Connection closed by 139.178.68.195 port 59860 Sep 16 05:05:11.807318 sshd-session[6300]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:11.826481 systemd[1]: sshd@16-172.31.26.160:22-139.178.68.195:59860.service: Deactivated successfully. Sep 16 05:05:11.831270 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 05:05:11.843108 systemd-logind[1866]: Session 17 logged out. Waiting for processes to exit. Sep 16 05:05:11.846345 systemd-logind[1866]: Removed session 17. Sep 16 05:05:16.838497 systemd[1]: Started sshd@17-172.31.26.160:22-139.178.68.195:59874.service - OpenSSH per-connection server daemon (139.178.68.195:59874). 
Sep 16 05:05:17.106698 sshd[6318]: Accepted publickey for core from 139.178.68.195 port 59874 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:17.108277 sshd-session[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:17.114385 systemd-logind[1866]: New session 18 of user core. Sep 16 05:05:17.122164 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 05:05:18.161842 sshd[6321]: Connection closed by 139.178.68.195 port 59874 Sep 16 05:05:18.164467 sshd-session[6318]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:18.171565 systemd[1]: sshd@17-172.31.26.160:22-139.178.68.195:59874.service: Deactivated successfully. Sep 16 05:05:18.174389 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 05:05:18.175433 systemd-logind[1866]: Session 18 logged out. Waiting for processes to exit. Sep 16 05:05:18.178647 systemd-logind[1866]: Removed session 18. Sep 16 05:05:21.481763 containerd[1903]: time="2025-09-16T05:05:21.481714143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec\" id:\"c44a1440629de119fcec4b6870e1c10473360d0dedfbaefda6832a55f3c43431\" pid:6365 exited_at:{seconds:1757999121 nanos:444045279}" Sep 16 05:05:22.210689 containerd[1903]: time="2025-09-16T05:05:22.210642354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\" id:\"dfb89fc832f92a6a9302f96844e68cdc4706e368570c8feef63fe6610d6e17f9\" pid:6363 exited_at:{seconds:1757999122 nanos:209903005}" Sep 16 05:05:22.602609 containerd[1903]: time="2025-09-16T05:05:22.602236121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\" id:\"ae8528e6c9151c03621c14391c4ff6ea395ed1537d39bc9e0ff92e9d705d4b2f\" pid:6415 exited_at:{seconds:1757999122 
nanos:599333714}" Sep 16 05:05:23.023666 containerd[1903]: time="2025-09-16T05:05:23.023623038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3\" id:\"9f0142ba8bbb629d7cebbc405f0c3ec92545a47769d6bded09e3beb0a976f9c6\" pid:6394 exited_at:{seconds:1757999123 nanos:23008584}" Sep 16 05:05:23.229793 systemd[1]: Started sshd@18-172.31.26.160:22-139.178.68.195:36298.service - OpenSSH per-connection server daemon (139.178.68.195:36298). Sep 16 05:05:23.590657 sshd[6431]: Accepted publickey for core from 139.178.68.195 port 36298 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:23.594064 sshd-session[6431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:23.602881 systemd-logind[1866]: New session 19 of user core. Sep 16 05:05:23.610073 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 05:05:26.155181 sshd[6434]: Connection closed by 139.178.68.195 port 36298 Sep 16 05:05:26.157054 sshd-session[6431]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:26.170365 systemd[1]: sshd@18-172.31.26.160:22-139.178.68.195:36298.service: Deactivated successfully. Sep 16 05:05:26.173977 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 05:05:26.175897 systemd-logind[1866]: Session 19 logged out. Waiting for processes to exit. Sep 16 05:05:26.179585 systemd-logind[1866]: Removed session 19. Sep 16 05:05:31.193167 systemd[1]: Started sshd@19-172.31.26.160:22-139.178.68.195:43354.service - OpenSSH per-connection server daemon (139.178.68.195:43354). 
Sep 16 05:05:31.456580 sshd[6453]: Accepted publickey for core from 139.178.68.195 port 43354 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:31.461720 sshd-session[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:31.468755 systemd-logind[1866]: New session 20 of user core. Sep 16 05:05:31.476189 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 05:05:32.506058 sshd[6456]: Connection closed by 139.178.68.195 port 43354 Sep 16 05:05:32.513047 sshd-session[6453]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:32.519555 systemd-logind[1866]: Session 20 logged out. Waiting for processes to exit. Sep 16 05:05:32.520628 systemd[1]: sshd@19-172.31.26.160:22-139.178.68.195:43354.service: Deactivated successfully. Sep 16 05:05:32.525748 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 05:05:32.532270 systemd-logind[1866]: Removed session 20. Sep 16 05:05:37.541891 systemd[1]: Started sshd@20-172.31.26.160:22-139.178.68.195:43368.service - OpenSSH per-connection server daemon (139.178.68.195:43368). Sep 16 05:05:37.776601 sshd[6469]: Accepted publickey for core from 139.178.68.195 port 43368 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:05:37.779264 sshd-session[6469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:37.787416 systemd-logind[1866]: New session 21 of user core. Sep 16 05:05:37.793115 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 05:05:38.698584 sshd[6472]: Connection closed by 139.178.68.195 port 43368 Sep 16 05:05:38.703056 sshd-session[6469]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:38.708313 systemd-logind[1866]: Session 21 logged out. Waiting for processes to exit. Sep 16 05:05:38.710376 systemd[1]: sshd@20-172.31.26.160:22-139.178.68.195:43368.service: Deactivated successfully. 
Sep 16 05:05:38.715888 systemd[1]: session-21.scope: Deactivated successfully.
Sep 16 05:05:38.720060 systemd-logind[1866]: Removed session 21.
Sep 16 05:05:41.993933 containerd[1903]: time="2025-09-16T05:05:41.993873234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec\" id:\"a996ca11e987a4a40951f6338936f3b5ca7a3969c9654bd927f6601fd47ee89d\" pid:6494 exited_at:{seconds:1757999141 nanos:949722542}"
Sep 16 05:05:43.738742 systemd[1]: Started sshd@21-172.31.26.160:22-139.178.68.195:51766.service - OpenSSH per-connection server daemon (139.178.68.195:51766).
Sep 16 05:05:44.072057 sshd[6504]: Accepted publickey for core from 139.178.68.195 port 51766 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:05:44.075303 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:44.081904 systemd-logind[1866]: New session 22 of user core.
Sep 16 05:05:44.087047 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 16 05:05:45.491448 sshd[6508]: Connection closed by 139.178.68.195 port 51766
Sep 16 05:05:45.495012 sshd-session[6504]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:45.502234 systemd[1]: sshd@21-172.31.26.160:22-139.178.68.195:51766.service: Deactivated successfully.
Sep 16 05:05:45.507517 systemd[1]: session-22.scope: Deactivated successfully.
Sep 16 05:05:45.511386 systemd-logind[1866]: Session 22 logged out. Waiting for processes to exit.
Sep 16 05:05:45.515105 systemd-logind[1866]: Removed session 22.
Sep 16 05:05:51.095121 containerd[1903]: time="2025-09-16T05:05:51.094958049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb97d1d71caf245ad88b848565ee54aa9a07e1496ee7ed5bf89d568a03216ec\" id:\"6bb131b827d5b8e2d8026865014a1ace0fe46a78466eaccef1d68923e9b1ec21\" pid:6534 exited_at:{seconds:1757999151 nanos:93908779}"
Sep 16 05:05:51.521742 containerd[1903]: time="2025-09-16T05:05:51.511217158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a5074d2174173e647954a15bbb3c4942fea678ce9581ba2c4eb7bfa50cd00a3\" id:\"91a8e58aa52e73a08ea4f23edff81524fe85c36ae3ba3e136420d3d11f6a41c6\" pid:6555 exited_at:{seconds:1757999151 nanos:510881394}"
Sep 16 05:05:52.470736 containerd[1903]: time="2025-09-16T05:05:52.470696370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35ce390107eb15267a30c5bf0298248e6a41d57c1f3ceedef9a73dc7874780c3\" id:\"62dc09fbc738d492ea5311f8bb827b8c52d6f807ba755c40677e148c43026c50\" pid:6578 exited_at:{seconds:1757999152 nanos:470408577}"
Sep 16 05:06:00.176443 systemd[1]: cri-containerd-ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8.scope: Deactivated successfully.
Sep 16 05:06:00.176822 systemd[1]: cri-containerd-ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8.scope: Consumed 3.723s CPU time, 80.5M memory peak, 120.3M read from disk.
Sep 16 05:06:00.319320 containerd[1903]: time="2025-09-16T05:06:00.319273462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8\" id:\"ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8\" pid:3139 exit_status:1 exited_at:{seconds:1757999160 nanos:277381811}"
Sep 16 05:06:00.330598 containerd[1903]: time="2025-09-16T05:06:00.330547267Z" level=info msg="received exit event container_id:\"ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8\" id:\"ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8\" pid:3139 exit_status:1 exited_at:{seconds:1757999160 nanos:277381811}"
Sep 16 05:06:00.448332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8-rootfs.mount: Deactivated successfully.
Sep 16 05:06:00.709727 systemd[1]: cri-containerd-02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1.scope: Deactivated successfully.
Sep 16 05:06:00.710625 systemd[1]: cri-containerd-02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1.scope: Consumed 13.623s CPU time, 103.6M memory peak, 90.1M read from disk.
Sep 16 05:06:00.714540 containerd[1903]: time="2025-09-16T05:06:00.714502267Z" level=info msg="received exit event container_id:\"02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1\" id:\"02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1\" pid:3611 exit_status:1 exited_at:{seconds:1757999160 nanos:714069353}"
Sep 16 05:06:00.715085 containerd[1903]: time="2025-09-16T05:06:00.715028919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1\" id:\"02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1\" pid:3611 exit_status:1 exited_at:{seconds:1757999160 nanos:714069353}"
Sep 16 05:06:00.752760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1-rootfs.mount: Deactivated successfully.
Sep 16 05:06:01.130744 kubelet[3288]: I0916 05:06:01.130593 3288 scope.go:117] "RemoveContainer" containerID="02aa8cf9b02cbec1c83f6415d2793ef5988e91631a3e322e0d0ab6ab437bbeb1"
Sep 16 05:06:01.135782 kubelet[3288]: I0916 05:06:01.135636 3288 scope.go:117] "RemoveContainer" containerID="ac26c8fb4744aa27b932cc658eeccf987032c4279eb82966270d8f579c0f95c8"
Sep 16 05:06:01.266645 containerd[1903]: time="2025-09-16T05:06:01.266585154Z" level=info msg="CreateContainer within sandbox \"d28536fa173817a6c2a7d2f0d75b61c723fe4944852536c0119c8dd2528b56bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 16 05:06:01.269861 containerd[1903]: time="2025-09-16T05:06:01.269562842Z" level=info msg="CreateContainer within sandbox \"36941ded6aa3d1c846ce04ad2fe29a4b2e229ac1ddcc9b26762ceecf5ca7f7e8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 16 05:06:01.500297 containerd[1903]: time="2025-09-16T05:06:01.496388419Z" level=info msg="Container 77f44b4d87202874bac9c19388ea7331247680c0a624533201fa4a0dcce54445: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:06:01.559993 containerd[1903]: time="2025-09-16T05:06:01.535400976Z" level=info msg="Container d8c1862a891c3f56b3ee6dcca738386eea6afea40b154431ac95fc10ecb19096: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:06:01.576001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250817082.mount: Deactivated successfully.
Sep 16 05:06:01.607960 containerd[1903]: time="2025-09-16T05:06:01.607846770Z" level=info msg="CreateContainer within sandbox \"d28536fa173817a6c2a7d2f0d75b61c723fe4944852536c0119c8dd2528b56bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d8c1862a891c3f56b3ee6dcca738386eea6afea40b154431ac95fc10ecb19096\""
Sep 16 05:06:01.609065 containerd[1903]: time="2025-09-16T05:06:01.608959058Z" level=info msg="StartContainer for \"d8c1862a891c3f56b3ee6dcca738386eea6afea40b154431ac95fc10ecb19096\""
Sep 16 05:06:01.651779 containerd[1903]: time="2025-09-16T05:06:01.650971936Z" level=info msg="CreateContainer within sandbox \"36941ded6aa3d1c846ce04ad2fe29a4b2e229ac1ddcc9b26762ceecf5ca7f7e8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"77f44b4d87202874bac9c19388ea7331247680c0a624533201fa4a0dcce54445\""
Sep 16 05:06:01.651779 containerd[1903]: time="2025-09-16T05:06:01.651639622Z" level=info msg="StartContainer for \"77f44b4d87202874bac9c19388ea7331247680c0a624533201fa4a0dcce54445\""
Sep 16 05:06:01.656125 containerd[1903]: time="2025-09-16T05:06:01.655937946Z" level=info msg="connecting to shim 77f44b4d87202874bac9c19388ea7331247680c0a624533201fa4a0dcce54445" address="unix:///run/containerd/s/367b364d278228f9286fb1bd186841c35846f2a8691ebfb322a6fb18c7ab92b9" protocol=ttrpc version=3
Sep 16 05:06:01.658898 containerd[1903]: time="2025-09-16T05:06:01.658836488Z" level=info msg="connecting to shim d8c1862a891c3f56b3ee6dcca738386eea6afea40b154431ac95fc10ecb19096" address="unix:///run/containerd/s/b5095885c375de7b866d97cbe2a1b695421354a5e522bc633fce77dabe7052a5" protocol=ttrpc version=3
Sep 16 05:06:01.912486 systemd[1]: Started cri-containerd-d8c1862a891c3f56b3ee6dcca738386eea6afea40b154431ac95fc10ecb19096.scope - libcontainer container d8c1862a891c3f56b3ee6dcca738386eea6afea40b154431ac95fc10ecb19096.
Sep 16 05:06:01.984969 systemd[1]: Started cri-containerd-77f44b4d87202874bac9c19388ea7331247680c0a624533201fa4a0dcce54445.scope - libcontainer container 77f44b4d87202874bac9c19388ea7331247680c0a624533201fa4a0dcce54445.
Sep 16 05:06:02.322665 containerd[1903]: time="2025-09-16T05:06:02.322608634Z" level=info msg="StartContainer for \"77f44b4d87202874bac9c19388ea7331247680c0a624533201fa4a0dcce54445\" returns successfully"
Sep 16 05:06:02.369446 containerd[1903]: time="2025-09-16T05:06:02.369404659Z" level=info msg="StartContainer for \"d8c1862a891c3f56b3ee6dcca738386eea6afea40b154431ac95fc10ecb19096\" returns successfully"
Sep 16 05:06:02.616595 kubelet[3288]: E0916 05:06:02.596555 3288 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-160?timeout=10s\": context deadline exceeded"
Sep 16 05:06:05.286207 systemd[1]: cri-containerd-eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3.scope: Deactivated successfully.
Sep 16 05:06:05.286639 systemd[1]: cri-containerd-eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3.scope: Consumed 2.086s CPU time, 34.8M memory peak, 81.4M read from disk.
Sep 16 05:06:05.297312 containerd[1903]: time="2025-09-16T05:06:05.297230263Z" level=info msg="received exit event container_id:\"eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3\" id:\"eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3\" pid:3117 exit_status:1 exited_at:{seconds:1757999165 nanos:296194979}"
Sep 16 05:06:05.298067 containerd[1903]: time="2025-09-16T05:06:05.297522085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3\" id:\"eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3\" pid:3117 exit_status:1 exited_at:{seconds:1757999165 nanos:296194979}"
Sep 16 05:06:05.338308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3-rootfs.mount: Deactivated successfully.
Sep 16 05:06:06.146585 kubelet[3288]: I0916 05:06:06.146555 3288 scope.go:117] "RemoveContainer" containerID="eea8c6c535c82daa9405a46b96cbc205698b50679ca3cf696781b5a68f028af3"
Sep 16 05:06:06.155639 containerd[1903]: time="2025-09-16T05:06:06.155571458Z" level=info msg="CreateContainer within sandbox \"a8e6c453dcbbd4152b74cc205571df3f47794d4c65e22bd18d6f6d54fe1607b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 16 05:06:06.194889 containerd[1903]: time="2025-09-16T05:06:06.194645787Z" level=info msg="Container 9d241f24e6c2eabadb85c885686a575a8950e21510fb9428ed25b6012653efd2: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:06:06.218969 containerd[1903]: time="2025-09-16T05:06:06.218926841Z" level=info msg="CreateContainer within sandbox \"a8e6c453dcbbd4152b74cc205571df3f47794d4c65e22bd18d6f6d54fe1607b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9d241f24e6c2eabadb85c885686a575a8950e21510fb9428ed25b6012653efd2\""
Sep 16 05:06:06.219439 containerd[1903]: time="2025-09-16T05:06:06.219406335Z" level=info msg="StartContainer for \"9d241f24e6c2eabadb85c885686a575a8950e21510fb9428ed25b6012653efd2\""
Sep 16 05:06:06.220430 containerd[1903]: time="2025-09-16T05:06:06.220400267Z" level=info msg="connecting to shim 9d241f24e6c2eabadb85c885686a575a8950e21510fb9428ed25b6012653efd2" address="unix:///run/containerd/s/bf22eca44c29eded2b92fac8fdff5e848402ceb3938a09177ff8fde72b11e32d" protocol=ttrpc version=3
Sep 16 05:06:06.241999 systemd[1]: Started cri-containerd-9d241f24e6c2eabadb85c885686a575a8950e21510fb9428ed25b6012653efd2.scope - libcontainer container 9d241f24e6c2eabadb85c885686a575a8950e21510fb9428ed25b6012653efd2.
Sep 16 05:06:06.298388 containerd[1903]: time="2025-09-16T05:06:06.298352912Z" level=info msg="StartContainer for \"9d241f24e6c2eabadb85c885686a575a8950e21510fb9428ed25b6012653efd2\" returns successfully"