Sep 4 00:02:19.906476 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025 Sep 4 00:02:19.906512 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:02:19.906526 kernel: BIOS-provided physical RAM map: Sep 4 00:02:19.906537 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 00:02:19.906547 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 4 00:02:19.906557 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 4 00:02:19.906570 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 4 00:02:19.906584 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 4 00:02:19.906595 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 4 00:02:19.906605 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 4 00:02:19.906616 kernel: NX (Execute Disable) protection: active Sep 4 00:02:19.906627 kernel: APIC: Static calls initialized Sep 4 00:02:19.906651 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Sep 4 00:02:19.906664 kernel: extended physical RAM map: Sep 4 00:02:19.906682 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 00:02:19.906695 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Sep 4 00:02:19.906709 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Sep 4 00:02:19.906722 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Sep 4 00:02:19.906735 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 4 00:02:19.906749 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 4 00:02:19.906762 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 4 00:02:19.906775 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 4 00:02:19.906789 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 4 00:02:19.906805 kernel: efi: EFI v2.7 by EDK II Sep 4 00:02:19.906818 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Sep 4 00:02:19.906830 kernel: secureboot: Secure boot disabled Sep 4 00:02:19.906842 kernel: SMBIOS 2.7 present. Sep 4 00:02:19.906855 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 4 00:02:19.906868 kernel: DMI: Memory slots populated: 1/1 Sep 4 00:02:19.906881 kernel: Hypervisor detected: KVM Sep 4 00:02:19.906893 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 00:02:19.906907 kernel: kvm-clock: using sched offset of 5212866481 cycles Sep 4 00:02:19.906921 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 00:02:19.906954 kernel: tsc: Detected 2499.996 MHz processor Sep 4 00:02:19.906969 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 00:02:19.906985 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 00:02:19.906999 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 4 00:02:19.907013 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 00:02:19.907026 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 00:02:19.907040 kernel: Using GB pages for direct mapping Sep 4 
00:02:19.907059 kernel: ACPI: Early table checksum verification disabled Sep 4 00:02:19.907076 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 4 00:02:19.907091 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 4 00:02:19.907106 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 4 00:02:19.907121 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 4 00:02:19.907135 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 4 00:02:19.907150 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 4 00:02:19.907164 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 4 00:02:19.907179 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 4 00:02:19.907198 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 4 00:02:19.907213 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 4 00:02:19.907227 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 4 00:02:19.907242 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 4 00:02:19.907257 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 4 00:02:19.907271 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 4 00:02:19.907285 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 4 00:02:19.907300 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 4 00:02:19.907313 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 4 00:02:19.907331 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 4 00:02:19.907357 kernel: ACPI: Reserving APIC table memory at [mem 
0x78959000-0x78959075] Sep 4 00:02:19.907372 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 4 00:02:19.907386 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 4 00:02:19.907401 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 4 00:02:19.907416 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 4 00:02:19.907431 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 4 00:02:19.907446 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 4 00:02:19.907461 kernel: NUMA: Initialized distance table, cnt=1 Sep 4 00:02:19.907479 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Sep 4 00:02:19.907492 kernel: Zone ranges: Sep 4 00:02:19.907504 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 00:02:19.907519 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 4 00:02:19.907534 kernel: Normal empty Sep 4 00:02:19.907548 kernel: Device empty Sep 4 00:02:19.907563 kernel: Movable zone start for each node Sep 4 00:02:19.907577 kernel: Early memory node ranges Sep 4 00:02:19.907591 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 00:02:19.907609 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 4 00:02:19.907624 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 4 00:02:19.907638 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 4 00:02:19.907651 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 00:02:19.907665 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 00:02:19.907680 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 4 00:02:19.907694 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 4 00:02:19.907708 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 00:02:19.907723 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 00:02:19.907741 kernel: IOAPIC[0]: 
apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 4 00:02:19.907755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 00:02:19.907769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 00:02:19.907784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 00:02:19.907798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 00:02:19.907813 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 00:02:19.907827 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 00:02:19.907841 kernel: TSC deadline timer available Sep 4 00:02:19.907856 kernel: CPU topo: Max. logical packages: 1 Sep 4 00:02:19.907870 kernel: CPU topo: Max. logical dies: 1 Sep 4 00:02:19.907887 kernel: CPU topo: Max. dies per package: 1 Sep 4 00:02:19.907901 kernel: CPU topo: Max. threads per core: 2 Sep 4 00:02:19.907915 kernel: CPU topo: Num. cores per package: 1 Sep 4 00:02:19.907953 kernel: CPU topo: Num. 
threads per package: 2 Sep 4 00:02:19.907968 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 4 00:02:19.907983 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 00:02:19.907998 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 4 00:02:19.908012 kernel: Booting paravirtualized kernel on KVM Sep 4 00:02:19.908027 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 00:02:19.908045 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 00:02:19.908060 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 4 00:02:19.908075 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 4 00:02:19.908090 kernel: pcpu-alloc: [0] 0 1 Sep 4 00:02:19.908104 kernel: kvm-guest: PV spinlocks enabled Sep 4 00:02:19.908120 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 00:02:19.908137 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:02:19.908152 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 00:02:19.908169 kernel: random: crng init done Sep 4 00:02:19.908184 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 00:02:19.908199 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 00:02:19.908214 kernel: Fallback order for Node 0: 0 Sep 4 00:02:19.908229 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 509451 Sep 4 00:02:19.908243 kernel: Policy zone: DMA32 Sep 4 00:02:19.908271 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 00:02:19.908286 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 00:02:19.908302 kernel: Kernel/User page tables isolation: enabled Sep 4 00:02:19.908317 kernel: ftrace: allocating 40099 entries in 157 pages Sep 4 00:02:19.908333 kernel: ftrace: allocated 157 pages with 5 groups Sep 4 00:02:19.908351 kernel: Dynamic Preempt: voluntary Sep 4 00:02:19.908366 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 00:02:19.908382 kernel: rcu: RCU event tracing is enabled. Sep 4 00:02:19.908398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 00:02:19.908414 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 00:02:19.908430 kernel: Rude variant of Tasks RCU enabled. Sep 4 00:02:19.908448 kernel: Tracing variant of Tasks RCU enabled. Sep 4 00:02:19.908464 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 00:02:19.908480 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 00:02:19.908496 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 4 00:02:19.908512 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 4 00:02:19.908528 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 4 00:02:19.908543 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 4 00:02:19.908559 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 4 00:02:19.908577 kernel: Console: colour dummy device 80x25 Sep 4 00:02:19.908593 kernel: printk: legacy console [tty0] enabled Sep 4 00:02:19.908608 kernel: printk: legacy console [ttyS0] enabled Sep 4 00:02:19.908623 kernel: ACPI: Core revision 20240827 Sep 4 00:02:19.908639 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 4 00:02:19.908655 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 00:02:19.908670 kernel: x2apic enabled Sep 4 00:02:19.908685 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 00:02:19.908701 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 4 00:02:19.908721 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Sep 4 00:02:19.908736 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 00:02:19.908752 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 4 00:02:19.908767 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 00:02:19.908782 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 00:02:19.908797 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 4 00:02:19.908812 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 4 00:02:19.908828 kernel: RETBleed: Vulnerable Sep 4 00:02:19.908843 kernel: Speculative Store Bypass: Vulnerable Sep 4 00:02:19.908858 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 00:02:19.908873 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 00:02:19.908891 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 00:02:19.908906 kernel: active return thunk: its_return_thunk Sep 4 00:02:19.908921 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 4 00:02:19.908962 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 00:02:19.908975 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 00:02:19.908988 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 00:02:19.909001 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 4 00:02:19.909014 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 4 00:02:19.909027 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 00:02:19.909040 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 00:02:19.909054 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 00:02:19.909071 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 4 00:02:19.909085 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 00:02:19.909098 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 4 00:02:19.909111 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 4 00:02:19.909125 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 4 00:02:19.909138 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 4 00:02:19.909152 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 4 00:02:19.909166 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 4 00:02:19.909180 kernel: x86/fpu: Enabled xstate features 0x2ff, 
context size is 2568 bytes, using 'compacted' format. Sep 4 00:02:19.909194 kernel: Freeing SMP alternatives memory: 32K Sep 4 00:02:19.909207 kernel: pid_max: default: 32768 minimum: 301 Sep 4 00:02:19.909224 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 4 00:02:19.909238 kernel: landlock: Up and running. Sep 4 00:02:19.909252 kernel: SELinux: Initializing. Sep 4 00:02:19.909266 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 4 00:02:19.909280 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 4 00:02:19.909294 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 00:02:19.909309 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 00:02:19.909323 kernel: signal: max sigframe size: 3632 Sep 4 00:02:19.909337 kernel: rcu: Hierarchical SRCU implementation. Sep 4 00:02:19.909352 kernel: rcu: Max phase no-delay instances is 400. Sep 4 00:02:19.909370 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 4 00:02:19.909384 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 00:02:19.909399 kernel: smp: Bringing up secondary CPUs ... Sep 4 00:02:19.909413 kernel: smpboot: x86: Booting SMP configuration: Sep 4 00:02:19.909427 kernel: .... node #0, CPUs: #1 Sep 4 00:02:19.909443 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 4 00:02:19.909459 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 00:02:19.909474 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 00:02:19.909489 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Sep 4 00:02:19.909506 kernel: Memory: 1910108K/2037804K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 123140K reserved, 0K cma-reserved) Sep 4 00:02:19.909521 kernel: devtmpfs: initialized Sep 4 00:02:19.909536 kernel: x86/mm: Memory block size: 128MB Sep 4 00:02:19.909550 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 4 00:02:19.909565 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 00:02:19.909580 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 00:02:19.909595 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 00:02:19.909610 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 00:02:19.909627 kernel: audit: initializing netlink subsys (disabled) Sep 4 00:02:19.909641 kernel: audit: type=2000 audit(1756944137.678:1): state=initialized audit_enabled=0 res=1 Sep 4 00:02:19.909655 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 00:02:19.909670 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 00:02:19.909685 kernel: cpuidle: using governor menu Sep 4 00:02:19.909700 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 00:02:19.909715 kernel: dca service started, version 1.12.1 Sep 4 00:02:19.909730 kernel: PCI: Using configuration type 1 for base access Sep 4 00:02:19.909744 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 00:02:19.909760 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 00:02:19.909775 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 00:02:19.909791 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 00:02:19.909804 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 00:02:19.909818 kernel: ACPI: Added _OSI(Module Device) Sep 4 00:02:19.909833 kernel: ACPI: Added _OSI(Processor Device) Sep 4 00:02:19.909847 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 00:02:19.909862 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 4 00:02:19.909875 kernel: ACPI: Interpreter enabled Sep 4 00:02:19.909893 kernel: ACPI: PM: (supports S0 S5) Sep 4 00:02:19.909907 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 00:02:19.909921 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 00:02:19.909955 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 00:02:19.909976 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 4 00:02:19.909989 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 00:02:19.910229 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 4 00:02:19.910375 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 4 00:02:19.910514 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 4 00:02:19.910533 kernel: acpiphp: Slot [3] registered Sep 4 00:02:19.910549 kernel: acpiphp: Slot [4] registered Sep 4 00:02:19.910565 kernel: acpiphp: Slot [5] registered Sep 4 00:02:19.910580 kernel: acpiphp: Slot [6] registered Sep 4 00:02:19.910596 kernel: acpiphp: Slot [7] registered Sep 4 00:02:19.910611 kernel: acpiphp: Slot [8] registered Sep 4 00:02:19.910627 kernel: acpiphp: Slot [9] registered Sep 4 00:02:19.910642 
kernel: acpiphp: Slot [10] registered Sep 4 00:02:19.910661 kernel: acpiphp: Slot [11] registered Sep 4 00:02:19.910676 kernel: acpiphp: Slot [12] registered Sep 4 00:02:19.910692 kernel: acpiphp: Slot [13] registered Sep 4 00:02:19.910707 kernel: acpiphp: Slot [14] registered Sep 4 00:02:19.910723 kernel: acpiphp: Slot [15] registered Sep 4 00:02:19.910739 kernel: acpiphp: Slot [16] registered Sep 4 00:02:19.910755 kernel: acpiphp: Slot [17] registered Sep 4 00:02:19.910770 kernel: acpiphp: Slot [18] registered Sep 4 00:02:19.910786 kernel: acpiphp: Slot [19] registered Sep 4 00:02:19.910803 kernel: acpiphp: Slot [20] registered Sep 4 00:02:19.910819 kernel: acpiphp: Slot [21] registered Sep 4 00:02:19.910835 kernel: acpiphp: Slot [22] registered Sep 4 00:02:19.910851 kernel: acpiphp: Slot [23] registered Sep 4 00:02:19.910866 kernel: acpiphp: Slot [24] registered Sep 4 00:02:19.910882 kernel: acpiphp: Slot [25] registered Sep 4 00:02:19.910898 kernel: acpiphp: Slot [26] registered Sep 4 00:02:19.910914 kernel: acpiphp: Slot [27] registered Sep 4 00:02:19.910952 kernel: acpiphp: Slot [28] registered Sep 4 00:02:19.910969 kernel: acpiphp: Slot [29] registered Sep 4 00:02:19.910988 kernel: acpiphp: Slot [30] registered Sep 4 00:02:19.911003 kernel: acpiphp: Slot [31] registered Sep 4 00:02:19.911019 kernel: PCI host bridge to bus 0000:00 Sep 4 00:02:19.911166 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 00:02:19.911289 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 00:02:19.911421 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 00:02:19.911542 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 4 00:02:19.911662 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 4 00:02:19.911784 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 00:02:19.911952 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 
0x060000 conventional PCI endpoint Sep 4 00:02:19.912100 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Sep 4 00:02:19.912246 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Sep 4 00:02:19.912381 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 00:02:19.912526 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 4 00:02:19.912664 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 4 00:02:19.912801 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 4 00:02:19.912950 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 4 00:02:19.913090 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 4 00:02:19.913221 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 4 00:02:19.913370 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Sep 4 00:02:19.913512 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Sep 4 00:02:19.913644 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 4 00:02:19.913775 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 00:02:19.913923 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Sep 4 00:02:19.914086 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Sep 4 00:02:19.914269 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Sep 4 00:02:19.914405 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Sep 4 00:02:19.914429 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 00:02:19.914444 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 00:02:19.914459 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 00:02:19.914474 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 00:02:19.914489 kernel: ACPI: PCI: Interrupt link LNKS configured 
for IRQ 9 Sep 4 00:02:19.914505 kernel: iommu: Default domain type: Translated Sep 4 00:02:19.914520 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 00:02:19.914534 kernel: efivars: Registered efivars operations Sep 4 00:02:19.914549 kernel: PCI: Using ACPI for IRQ routing Sep 4 00:02:19.914567 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 00:02:19.914582 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Sep 4 00:02:19.914596 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 4 00:02:19.914611 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 4 00:02:19.914740 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 4 00:02:19.914868 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 4 00:02:19.915017 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 00:02:19.915036 kernel: vgaarb: loaded Sep 4 00:02:19.915054 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 4 00:02:19.915069 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 4 00:02:19.915084 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 00:02:19.915098 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 00:02:19.915114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 00:02:19.915128 kernel: pnp: PnP ACPI init Sep 4 00:02:19.915143 kernel: pnp: PnP ACPI: found 5 devices Sep 4 00:02:19.915158 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 00:02:19.915173 kernel: NET: Registered PF_INET protocol family Sep 4 00:02:19.915191 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 00:02:19.915206 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 4 00:02:19.915221 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 00:02:19.915235 kernel: TCP established hash table 
entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 00:02:19.915250 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 4 00:02:19.915265 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 4 00:02:19.915279 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 4 00:02:19.915294 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 4 00:02:19.915309 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 00:02:19.915326 kernel: NET: Registered PF_XDP protocol family Sep 4 00:02:19.915463 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 00:02:19.915581 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 00:02:19.915698 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 00:02:19.915814 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 4 00:02:19.915967 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 4 00:02:19.916106 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 00:02:19.916125 kernel: PCI: CLS 0 bytes, default 64 Sep 4 00:02:19.916145 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 00:02:19.916160 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 4 00:02:19.916175 kernel: clocksource: Switched to clocksource tsc Sep 4 00:02:19.916190 kernel: Initialise system trusted keyrings Sep 4 00:02:19.916205 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 4 00:02:19.916219 kernel: Key type asymmetric registered Sep 4 00:02:19.916234 kernel: Asymmetric key parser 'x509' registered Sep 4 00:02:19.916249 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 00:02:19.916264 kernel: io scheduler mq-deadline registered Sep 4 00:02:19.916281 kernel: io scheduler kyber registered Sep 4 
00:02:19.916296 kernel: io scheduler bfq registered Sep 4 00:02:19.916311 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 00:02:19.916327 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 00:02:19.916342 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 00:02:19.916357 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 00:02:19.916372 kernel: i8042: Warning: Keylock active Sep 4 00:02:19.916386 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 00:02:19.916401 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 00:02:19.916547 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 4 00:02:19.916672 kernel: rtc_cmos 00:00: registered as rtc0 Sep 4 00:02:19.916793 kernel: rtc_cmos 00:00: setting system clock to 2025-09-04T00:02:19 UTC (1756944139) Sep 4 00:02:19.916916 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 4 00:02:19.916976 kernel: intel_pstate: CPU model not supported Sep 4 00:02:19.916995 kernel: efifb: probing for efifb Sep 4 00:02:19.917010 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 4 00:02:19.917028 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 4 00:02:19.917044 kernel: efifb: scrolling: redraw Sep 4 00:02:19.917060 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 00:02:19.917075 kernel: Console: switching to colour frame buffer device 100x37 Sep 4 00:02:19.917091 kernel: fb0: EFI VGA frame buffer device Sep 4 00:02:19.917106 kernel: pstore: Using crash dump compression: deflate Sep 4 00:02:19.917122 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 00:02:19.917137 kernel: NET: Registered PF_INET6 protocol family Sep 4 00:02:19.917153 kernel: Segment Routing with IPv6 Sep 4 00:02:19.917168 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 00:02:19.917186 kernel: NET: Registered PF_PACKET protocol family Sep 4 00:02:19.917202 kernel: Key type dns_resolver 
registered Sep 4 00:02:19.917217 kernel: IPI shorthand broadcast: enabled Sep 4 00:02:19.917232 kernel: sched_clock: Marking stable (2621003036, 149374718)->(2868093611, -97715857) Sep 4 00:02:19.917248 kernel: registered taskstats version 1 Sep 4 00:02:19.917263 kernel: Loading compiled-in X.509 certificates Sep 4 00:02:19.917279 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10' Sep 4 00:02:19.917294 kernel: Demotion targets for Node 0: null Sep 4 00:02:19.917309 kernel: Key type .fscrypt registered Sep 4 00:02:19.917327 kernel: Key type fscrypt-provisioning registered Sep 4 00:02:19.917342 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 00:02:19.917357 kernel: ima: Allocated hash algorithm: sha1 Sep 4 00:02:19.917373 kernel: ima: No architecture policies found Sep 4 00:02:19.917388 kernel: clk: Disabling unused clocks Sep 4 00:02:19.917404 kernel: Warning: unable to open an initial console. Sep 4 00:02:19.917419 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 4 00:02:19.917435 kernel: Write protecting the kernel read-only data: 24576k Sep 4 00:02:19.917453 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 4 00:02:19.917474 kernel: Run /init as init process Sep 4 00:02:19.917489 kernel: with arguments: Sep 4 00:02:19.917505 kernel: /init Sep 4 00:02:19.917520 kernel: with environment: Sep 4 00:02:19.917535 kernel: HOME=/ Sep 4 00:02:19.917553 kernel: TERM=linux Sep 4 00:02:19.917568 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 00:02:19.917586 systemd[1]: Successfully made /usr/ read-only. 
Sep 4 00:02:19.917606 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:02:19.917624 systemd[1]: Detected virtualization amazon.
Sep 4 00:02:19.917639 systemd[1]: Detected architecture x86-64.
Sep 4 00:02:19.917655 systemd[1]: Running in initrd.
Sep 4 00:02:19.917673 systemd[1]: No hostname configured, using default hostname.
Sep 4 00:02:19.917691 systemd[1]: Hostname set to .
Sep 4 00:02:19.917707 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 00:02:19.917723 systemd[1]: Queued start job for default target initrd.target.
Sep 4 00:02:19.917740 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:02:19.917756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:02:19.917775 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 00:02:19.917791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:02:19.917810 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 00:02:19.917828 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 00:02:19.917846 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 00:02:19.917863 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 00:02:19.917879 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:02:19.917896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:02:19.917912 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:02:19.917941 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:02:19.917968 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:02:19.917984 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:02:19.918000 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:02:19.918017 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:02:19.918033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 00:02:19.918050 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 00:02:19.918067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:02:19.918084 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:02:19.918104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:02:19.918120 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:02:19.918137 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 00:02:19.918154 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:02:19.918170 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 00:02:19.918187 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 00:02:19.918204 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 00:02:19.918221 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:02:19.918240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:02:19.918256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:02:19.918303 systemd-journald[207]: Collecting audit messages is disabled.
Sep 4 00:02:19.918345 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 00:02:19.918367 systemd-journald[207]: Journal started
Sep 4 00:02:19.918400 systemd-journald[207]: Runtime Journal (/run/log/journal/ec27a517ca3e4ee21060ecb6697d8023) is 4.8M, max 38.4M, 33.6M free.
Sep 4 00:02:19.921967 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:02:19.925958 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:02:19.926194 systemd-modules-load[208]: Inserted module 'overlay'
Sep 4 00:02:19.927500 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 00:02:19.932958 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 00:02:19.937090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:02:19.962126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:02:19.972142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 00:02:19.978186 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 00:02:19.985026 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 00:02:19.984041 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 00:02:19.988437 kernel: Bridge firewalling registered
Sep 4 00:02:19.987123 systemd-modules-load[208]: Inserted module 'br_netfilter'
Sep 4 00:02:19.989402 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:02:19.990251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:02:19.997142 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:02:19.999513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:02:20.009971 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 00:02:20.016877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:02:20.021861 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 00:02:20.023655 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:02:20.026581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:02:20.033102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:02:20.049563 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:02:20.092724 systemd-resolved[247]: Positive Trust Anchors:
Sep 4 00:02:20.093649 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:02:20.093712 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 00:02:20.102721 systemd-resolved[247]: Defaulting to hostname 'linux'.
Sep 4 00:02:20.105253 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:02:20.106693 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:02:20.153968 kernel: SCSI subsystem initialized
Sep 4 00:02:20.163998 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 00:02:20.175969 kernel: iscsi: registered transport (tcp)
Sep 4 00:02:20.198188 kernel: iscsi: registered transport (qla4xxx)
Sep 4 00:02:20.198269 kernel: QLogic iSCSI HBA Driver
Sep 4 00:02:20.217369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:02:20.248242 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:02:20.249323 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:02:20.296808 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:02:20.299014 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 00:02:20.355984 kernel: raid6: avx512x4 gen() 17474 MB/s
Sep 4 00:02:20.373970 kernel: raid6: avx512x2 gen() 16472 MB/s
Sep 4 00:02:20.391986 kernel: raid6: avx512x1 gen() 16586 MB/s
Sep 4 00:02:20.410983 kernel: raid6: avx2x4 gen() 11969 MB/s
Sep 4 00:02:20.428989 kernel: raid6: avx2x2 gen() 8069 MB/s
Sep 4 00:02:20.467352 kernel: raid6: avx2x1 gen() 156 MB/s
Sep 4 00:02:20.467440 kernel: raid6: using algorithm avx512x4 gen() 17474 MB/s
Sep 4 00:02:20.504504 kernel: raid6: .... xor() 2262 MB/s, rmw enabled
Sep 4 00:02:20.504578 kernel: raid6: using avx512x2 recovery algorithm
Sep 4 00:02:20.526979 kernel: xor: automatically using best checksumming function avx
Sep 4 00:02:20.699971 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 00:02:20.706852 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:02:20.709311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:02:20.734071 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Sep 4 00:02:20.740964 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:02:20.745068 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 00:02:20.767250 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Sep 4 00:02:20.793759 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:02:20.796189 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:02:20.855416 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:02:20.861287 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 00:02:20.962995 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 00:02:20.978299 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 00:02:20.978633 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 00:02:20.990959 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 4 00:02:20.994953 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Sep 4 00:02:20.997242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:02:20.997491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:02:21.000059 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:02:21.022688 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 00:02:21.022990 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 00:02:21.023014 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2c:dd:5a:8d:73
Sep 4 00:02:21.023204 kernel: AES CTR mode by8 optimization enabled
Sep 4 00:02:21.023223 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 00:02:21.001769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:02:21.019646 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:02:21.030794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:02:21.030947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:02:21.039203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:02:21.049816 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 00:02:21.049852 kernel: GPT:9289727 != 16777215
Sep 4 00:02:21.049871 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 00:02:21.049890 kernel: GPT:9289727 != 16777215
Sep 4 00:02:21.049909 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 00:02:21.049927 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 00:02:21.057344 (udev-worker)[502]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 00:02:21.081193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:02:21.097047 kernel: nvme nvme0: using unchecked data buffer
Sep 4 00:02:21.199324 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 00:02:21.249337 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 00:02:21.250266 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:02:21.262154 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 00:02:21.272378 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 00:02:21.272988 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 00:02:21.274497 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:02:21.275584 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:02:21.276711 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:02:21.278411 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 00:02:21.282084 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 00:02:21.300904 disk-uuid[693]: Primary Header is updated.
Sep 4 00:02:21.300904 disk-uuid[693]: Secondary Entries is updated.
Sep 4 00:02:21.300904 disk-uuid[693]: Secondary Header is updated.
Sep 4 00:02:21.307953 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 00:02:21.309181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:02:22.322288 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 00:02:22.323589 disk-uuid[696]: The operation has completed successfully.
Sep 4 00:02:22.469615 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 00:02:22.469746 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 00:02:22.506022 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 00:02:22.520578 sh[961]: Success
Sep 4 00:02:22.549691 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 00:02:22.549774 kernel: device-mapper: uevent: version 1.0.3
Sep 4 00:02:22.550027 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 00:02:22.563978 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 4 00:02:22.676354 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 00:02:22.680080 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 00:02:22.696287 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 00:02:22.714955 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (984)
Sep 4 00:02:22.715009 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071
Sep 4 00:02:22.718562 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:02:22.821203 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 00:02:22.821272 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 00:02:22.823557 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 00:02:22.849407 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 00:02:22.850481 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:02:22.851205 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 00:02:22.852465 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 00:02:22.854947 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 00:02:22.895967 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1017)
Sep 4 00:02:22.899979 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:02:22.900046 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:02:22.911544 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 00:02:22.911629 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 4 00:02:22.919997 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:02:22.920415 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 00:02:22.923222 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 00:02:22.966699 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:02:22.969396 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:02:23.008686 systemd-networkd[1153]: lo: Link UP
Sep 4 00:02:23.008698 systemd-networkd[1153]: lo: Gained carrier
Sep 4 00:02:23.012355 systemd-networkd[1153]: Enumeration completed
Sep 4 00:02:23.013219 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 00:02:23.013474 systemd-networkd[1153]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:02:23.013480 systemd-networkd[1153]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 00:02:23.015133 systemd[1]: Reached target network.target - Network.
Sep 4 00:02:23.017366 systemd-networkd[1153]: eth0: Link UP
Sep 4 00:02:23.017372 systemd-networkd[1153]: eth0: Gained carrier
Sep 4 00:02:23.017390 systemd-networkd[1153]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:02:23.043192 systemd-networkd[1153]: eth0: DHCPv4 address 172.31.20.83/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 00:02:23.464048 ignition[1098]: Ignition 2.21.0
Sep 4 00:02:23.464064 ignition[1098]: Stage: fetch-offline
Sep 4 00:02:23.464239 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:02:23.464248 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 00:02:23.465823 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:02:23.464530 ignition[1098]: Ignition finished successfully
Sep 4 00:02:23.468386 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 00:02:23.501011 ignition[1162]: Ignition 2.21.0
Sep 4 00:02:23.501025 ignition[1162]: Stage: fetch
Sep 4 00:02:23.501313 ignition[1162]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:02:23.501321 ignition[1162]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 00:02:23.501408 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 00:02:23.519989 ignition[1162]: PUT result: OK
Sep 4 00:02:23.522132 ignition[1162]: parsed url from cmdline: ""
Sep 4 00:02:23.522143 ignition[1162]: no config URL provided
Sep 4 00:02:23.522150 ignition[1162]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 00:02:23.522162 ignition[1162]: no config at "/usr/lib/ignition/user.ign"
Sep 4 00:02:23.522185 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 00:02:23.523005 ignition[1162]: PUT result: OK
Sep 4 00:02:23.523095 ignition[1162]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 00:02:23.524103 ignition[1162]: GET result: OK
Sep 4 00:02:23.524236 ignition[1162]: parsing config with SHA512: 0a07092d47489e64f0d8b90ef98d0b0bbdd78e9492050807b7f94e33c136ef5a791f0091e6a688942d5dd1c27f6d63247a981c3d2188963b409b64c07474fcac
Sep 4 00:02:23.531285 unknown[1162]: fetched base config from "system"
Sep 4 00:02:23.531301 unknown[1162]: fetched base config from "system"
Sep 4 00:02:23.531744 ignition[1162]: fetch: fetch complete
Sep 4 00:02:23.531308 unknown[1162]: fetched user config from "aws"
Sep 4 00:02:23.531749 ignition[1162]: fetch: fetch passed
Sep 4 00:02:23.531795 ignition[1162]: Ignition finished successfully
Sep 4 00:02:23.534265 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 00:02:23.535764 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 00:02:23.568238 ignition[1168]: Ignition 2.21.0
Sep 4 00:02:23.568253 ignition[1168]: Stage: kargs
Sep 4 00:02:23.568631 ignition[1168]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:02:23.568643 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 00:02:23.568761 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 00:02:23.569679 ignition[1168]: PUT result: OK
Sep 4 00:02:23.572712 ignition[1168]: kargs: kargs passed
Sep 4 00:02:23.572782 ignition[1168]: Ignition finished successfully
Sep 4 00:02:23.575100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 00:02:23.576722 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 00:02:23.604749 ignition[1174]: Ignition 2.21.0
Sep 4 00:02:23.604767 ignition[1174]: Stage: disks
Sep 4 00:02:23.605169 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:02:23.605181 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 00:02:23.605298 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 00:02:23.606176 ignition[1174]: PUT result: OK
Sep 4 00:02:23.608622 ignition[1174]: disks: disks passed
Sep 4 00:02:23.608705 ignition[1174]: Ignition finished successfully
Sep 4 00:02:23.610451 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 00:02:23.611062 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 00:02:23.611437 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 00:02:23.611997 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:02:23.612502 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 00:02:23.613053 systemd[1]: Reached target basic.target - Basic System.
Sep 4 00:02:23.614678 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 00:02:23.664319 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 00:02:23.666894 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 00:02:23.669566 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 00:02:23.820979 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none.
Sep 4 00:02:23.821523 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 00:02:23.822512 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:02:23.824555 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 00:02:23.827116 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 00:02:23.830657 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 00:02:23.831947 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 00:02:23.831988 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:02:23.840328 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 00:02:23.842348 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 00:02:23.861145 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201)
Sep 4 00:02:23.861204 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:02:23.863785 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:02:23.871777 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 00:02:23.871846 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 4 00:02:23.873615 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 00:02:24.126120 systemd-networkd[1153]: eth0: Gained IPv6LL
Sep 4 00:02:24.235208 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 00:02:24.252303 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory
Sep 4 00:02:24.270072 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 00:02:24.275672 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 00:02:24.565523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 00:02:24.567689 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 00:02:24.571088 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 00:02:24.591390 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 00:02:24.595148 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:02:24.620071 ignition[1313]: INFO : Ignition 2.21.0
Sep 4 00:02:24.620071 ignition[1313]: INFO : Stage: mount
Sep 4 00:02:24.621203 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:02:24.621203 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 00:02:24.621203 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 00:02:24.622209 ignition[1313]: INFO : PUT result: OK
Sep 4 00:02:24.622755 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 00:02:24.627766 ignition[1313]: INFO : mount: mount passed
Sep 4 00:02:24.628295 ignition[1313]: INFO : Ignition finished successfully
Sep 4 00:02:24.629527 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 00:02:24.630817 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 00:02:24.823666 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 00:02:24.862001 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325)
Sep 4 00:02:24.867013 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:02:24.867104 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:02:24.877517 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 00:02:24.877607 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 4 00:02:24.879664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 00:02:24.916063 ignition[1341]: INFO : Ignition 2.21.0
Sep 4 00:02:24.918029 ignition[1341]: INFO : Stage: files
Sep 4 00:02:24.918029 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:02:24.918029 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 00:02:24.918029 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 00:02:24.921795 ignition[1341]: INFO : PUT result: OK
Sep 4 00:02:24.925021 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 00:02:24.926191 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 00:02:24.926191 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 00:02:24.949912 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 00:02:24.953840 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 00:02:24.953840 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 00:02:24.950603 unknown[1341]: wrote ssh authorized keys file for user: core
Sep 4 00:02:24.967157 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 4 00:02:24.968265 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 4 00:02:25.036056 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 00:02:25.316944 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 4 00:02:25.316944 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 00:02:25.318496 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 00:02:25.318496 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 00:02:25.318496 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 00:02:25.318496 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 00:02:25.318496 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 00:02:25.318496 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 00:02:25.318496 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 00:02:25.323841 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 00:02:25.324678 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 00:02:25.324678 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 4 00:02:25.326539 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 4 00:02:25.326539 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 4 00:02:25.326539 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 4 00:02:26.075483 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 00:02:28.117826 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 4 00:02:28.117826 ignition[1341]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 00:02:28.127244 ignition[1341]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 00:02:28.132453 ignition[1341]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 00:02:28.132453 ignition[1341]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 00:02:28.132453 ignition[1341]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 00:02:28.132453 ignition[1341]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 00:02:28.132453 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 00:02:28.132453 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 00:02:28.132453 ignition[1341]: INFO : files: files passed
Sep 4 00:02:28.132453 ignition[1341]: INFO : Ignition finished successfully
Sep 4 00:02:28.135195 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 00:02:28.136357 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 00:02:28.140105 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 00:02:28.154626 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 00:02:28.154724 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 00:02:28.159771 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:02:28.161585 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:02:28.162307 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:02:28.162499 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:02:28.164189 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 00:02:28.165860 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 00:02:28.207051 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 00:02:28.207182 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 00:02:28.208677 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 00:02:28.209398 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 00:02:28.210483 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 00:02:28.211528 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 00:02:28.233890 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:02:28.235960 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 00:02:28.259587 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:02:28.260285 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:02:28.261361 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 00:02:28.262224 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 00:02:28.262448 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:02:28.263668 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 00:02:28.264594 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 00:02:28.265377 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 00:02:28.266198 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:02:28.267016 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 00:02:28.267881 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:02:28.268689 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 00:02:28.269450 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:02:28.270250 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 00:02:28.271376 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 00:02:28.272235 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 00:02:28.272968 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 00:02:28.273199 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:02:28.274247 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:02:28.275039 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:02:28.275859 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 00:02:28.276563 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:02:28.277034 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 00:02:28.277209 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:02:28.278718 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 00:02:28.278975 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:02:28.279784 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 00:02:28.280006 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 00:02:28.281693 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 00:02:28.286958 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 00:02:28.287895 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 00:02:28.288124 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:02:28.290185 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 00:02:28.290361 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:02:28.297284 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 00:02:28.297408 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 00:02:28.320976 ignition[1396]: INFO : Ignition 2.21.0
Sep 4 00:02:28.320976 ignition[1396]: INFO : Stage: umount
Sep 4 00:02:28.320976 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:02:28.320976 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 00:02:28.320976 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 00:02:28.325954 ignition[1396]: INFO : PUT result: OK
Sep 4 00:02:28.323363 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 00:02:28.327586 ignition[1396]: INFO : umount: umount passed
Sep 4 00:02:28.328005 ignition[1396]: INFO : Ignition finished successfully
Sep 4 00:02:28.329959 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 00:02:28.330123 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 00:02:28.331050 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 00:02:28.331116 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 00:02:28.331698 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 00:02:28.331759 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 00:02:28.332354 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 00:02:28.332411 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 00:02:28.333025 systemd[1]: Stopped target network.target - Network.
Sep 4 00:02:28.333663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 00:02:28.333729 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:02:28.334333 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 00:02:28.334896 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 00:02:28.339050 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:02:28.339593 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 00:02:28.340587 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 00:02:28.341294 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 00:02:28.341342 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:02:28.341877 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 00:02:28.341910 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:02:28.342463 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 00:02:28.342520 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 00:02:28.343115 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 00:02:28.343157 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 00:02:28.344073 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 00:02:28.344570 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 00:02:28.349750 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 00:02:28.349893 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 00:02:28.354646 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 00:02:28.355033 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 00:02:28.355184 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 00:02:28.357754 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 00:02:28.359218 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 00:02:28.359860 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 00:02:28.359914 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:02:28.362493 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 00:02:28.363253 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 00:02:28.363465 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:02:28.365103 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 00:02:28.365171 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:02:28.368105 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 00:02:28.368169 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:02:28.369837 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 00:02:28.369919 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:02:28.370803 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:02:28.375748 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 00:02:28.375853 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:02:28.393765 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 00:02:28.394010 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:02:28.395737 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 00:02:28.395821 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:02:28.397416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 00:02:28.397471 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:02:28.398893 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 00:02:28.399040 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:02:28.399858 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 00:02:28.399967 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:02:28.401115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 00:02:28.401188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:02:28.403508 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 00:02:28.405572 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 00:02:28.405663 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:02:28.408794 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 00:02:28.408871 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:02:28.411143 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 00:02:28.411198 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 00:02:28.412647 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 00:02:28.412716 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:02:28.413738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:02:28.413815 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:02:28.416518 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 00:02:28.416602 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 4 00:02:28.416654 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 00:02:28.416709 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:02:28.417246 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 00:02:28.417565 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 00:02:28.427653 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 00:02:28.427805 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 00:02:28.488818 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 00:02:28.488947 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 00:02:28.490261 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 00:02:28.490743 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 00:02:28.490813 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 00:02:28.492625 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 00:02:28.508863 systemd[1]: Switching root.
Sep 4 00:02:28.548646 systemd-journald[207]: Journal stopped
Sep 4 00:02:30.537562 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 4 00:02:30.537666 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 00:02:30.537693 kernel: SELinux: policy capability open_perms=1
Sep 4 00:02:30.537715 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 00:02:30.537734 kernel: SELinux: policy capability always_check_network=0
Sep 4 00:02:30.537760 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 00:02:30.537786 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 00:02:30.537812 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 00:02:30.537830 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 00:02:30.537848 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 00:02:30.537867 kernel: audit: type=1403 audit(1756944148.936:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 00:02:30.537891 systemd[1]: Successfully loaded SELinux policy in 76.231ms.
Sep 4 00:02:30.537949 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.382ms.
Sep 4 00:02:30.537981 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:02:30.538004 systemd[1]: Detected virtualization amazon.
Sep 4 00:02:30.538023 systemd[1]: Detected architecture x86-64.
Sep 4 00:02:30.538045 systemd[1]: Detected first boot.
Sep 4 00:02:30.538069 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 00:02:30.538094 kernel: Guest personality initialized and is inactive
Sep 4 00:02:30.538116 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 00:02:30.538135 kernel: Initialized host personality
Sep 4 00:02:30.538163 zram_generator::config[1440]: No configuration found.
Sep 4 00:02:30.538191 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 00:02:30.538216 systemd[1]: Populated /etc with preset unit settings.
Sep 4 00:02:30.538243 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 00:02:30.538268 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 00:02:30.538293 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 00:02:30.538316 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:02:30.538340 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 00:02:30.538363 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 00:02:30.538386 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 00:02:30.538413 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 00:02:30.538438 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 00:02:30.538462 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 00:02:30.538486 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 00:02:30.538513 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 00:02:30.538539 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:02:30.538562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:02:30.538588 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 00:02:30.538616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 00:02:30.538640 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 00:02:30.538664 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:02:30.538689 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 00:02:30.538713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:02:30.538741 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:02:30.538766 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 00:02:30.538788 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 00:02:30.538819 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:02:30.538843 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 00:02:30.538867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:02:30.538892 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:02:30.538915 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:02:30.553583 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:02:30.553629 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 00:02:30.553657 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 00:02:30.553684 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 00:02:30.553717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:02:30.553741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:02:30.553766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:02:30.553791 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 00:02:30.553817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 00:02:30.553844 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 00:02:30.553866 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 00:02:30.553892 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:02:30.553915 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 00:02:30.554607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 00:02:30.554641 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 00:02:30.554661 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 00:02:30.554680 systemd[1]: Reached target machines.target - Containers.
Sep 4 00:02:30.554700 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 00:02:30.554724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:02:30.554743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:02:30.554765 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 00:02:30.554791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:02:30.554812 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:02:30.554832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:02:30.554852 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 00:02:30.554871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:02:30.554892 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 00:02:30.554912 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 00:02:30.554974 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 00:02:30.554999 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 00:02:30.555026 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 00:02:30.555047 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:02:30.555069 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:02:30.555091 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:02:30.555117 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:02:30.555143 kernel: loop: module loaded
Sep 4 00:02:30.555166 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 00:02:30.555188 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 00:02:30.555210 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:02:30.555233 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 00:02:30.555256 systemd[1]: Stopped verity-setup.service.
Sep 4 00:02:30.555290 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:02:30.555315 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 00:02:30.555336 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 00:02:30.555359 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 00:02:30.555381 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 00:02:30.555403 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 00:02:30.555425 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 00:02:30.555448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:02:30.555473 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 00:02:30.555493 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 00:02:30.555513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:02:30.568465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:02:30.568516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:02:30.568539 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:02:30.568558 kernel: ACPI: bus type drm_connector registered
Sep 4 00:02:30.568579 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:02:30.568598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:02:30.568624 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:02:30.568644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:02:30.568662 kernel: fuse: init (API version 7.41)
Sep 4 00:02:30.568680 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 00:02:30.568699 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:02:30.568719 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 00:02:30.568739 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 00:02:30.568758 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 00:02:30.568782 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:02:30.568804 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 00:02:30.568825 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 00:02:30.568846 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 00:02:30.568868 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:02:30.568891 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 00:02:30.568913 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 00:02:30.577004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:02:30.577053 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 00:02:30.577073 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:02:30.577140 systemd-journald[1523]: Collecting audit messages is disabled.
Sep 4 00:02:30.577180 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 00:02:30.577207 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:02:30.577227 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 00:02:30.577247 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 00:02:30.577266 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:02:30.577285 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 00:02:30.577304 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:02:30.577323 systemd-journald[1523]: Journal started
Sep 4 00:02:30.577369 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec27a517ca3e4ee21060ecb6697d8023) is 4.8M, max 38.4M, 33.6M free.
Sep 4 00:02:30.065355 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 00:02:30.085441 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 00:02:30.579968 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:02:30.086141 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 00:02:30.570783 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Sep 4 00:02:30.570807 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Sep 4 00:02:30.582414 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 00:02:30.588839 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 00:02:30.594671 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 00:02:30.606139 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 00:02:30.610462 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 00:02:30.630042 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec27a517ca3e4ee21060ecb6697d8023 is 60.566ms for 1026 entries.
Sep 4 00:02:30.630042 systemd-journald[1523]: System Journal (/var/log/journal/ec27a517ca3e4ee21060ecb6697d8023) is 8M, max 195.6M, 187.6M free.
Sep 4 00:02:30.697095 systemd-journald[1523]: Received client request to flush runtime journal.
Sep 4 00:02:30.697172 kernel: loop0: detected capacity change from 0 to 113872
Sep 4 00:02:30.641624 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 00:02:30.661505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:02:30.701844 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 00:02:30.708817 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:02:30.720061 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 00:02:30.759953 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 00:02:30.780697 kernel: loop1: detected capacity change from 0 to 229808
Sep 4 00:02:30.790365 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 00:02:30.794165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:02:30.820769 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Sep 4 00:02:30.821123 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Sep 4 00:02:30.826107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:02:30.926972 kernel: loop2: detected capacity change from 0 to 146240
Sep 4 00:02:31.058964 kernel: loop3: detected capacity change from 0 to 72352
Sep 4 00:02:31.088478 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 00:02:31.092055 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 00:02:31.106013 kernel: loop4: detected capacity change from 0 to 113872
Sep 4 00:02:31.113458 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 00:02:31.146225 kernel: loop5: detected capacity change from 0 to 229808
Sep 4 00:02:31.189516 kernel: loop6: detected capacity change from 0 to 146240
Sep 4 00:02:31.221974 kernel: loop7: detected capacity change from 0 to 72352
Sep 4 00:02:31.241331 (sd-merge)[1599]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 00:02:31.243998 (sd-merge)[1599]: Merged extensions into '/usr'.
Sep 4 00:02:31.250881 systemd[1]: Reload requested from client PID 1556 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 00:02:31.251089 systemd[1]: Reloading...
Sep 4 00:02:31.320077 zram_generator::config[1622]: No configuration found.
Sep 4 00:02:31.567372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:02:31.726316 systemd[1]: Reloading finished in 473 ms.
Sep 4 00:02:31.751560 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 00:02:31.752622 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 00:02:31.765541 systemd[1]: Starting ensure-sysext.service... Sep 4 00:02:31.768803 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:02:31.777319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:02:31.801369 systemd[1]: Reload requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)... Sep 4 00:02:31.801549 systemd[1]: Reloading... Sep 4 00:02:31.848061 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 00:02:31.848108 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 00:02:31.848479 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 00:02:31.849519 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 00:02:31.853853 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 00:02:31.855215 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Sep 4 00:02:31.858332 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Sep 4 00:02:31.864568 systemd-udevd[1680]: Using default interface naming scheme 'v255'. Sep 4 00:02:31.876027 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 00:02:31.876426 systemd-tmpfiles[1679]: Skipping /boot Sep 4 00:02:31.939990 zram_generator::config[1711]: No configuration found. Sep 4 00:02:31.940379 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 4 00:02:31.940403 systemd-tmpfiles[1679]: Skipping /boot Sep 4 00:02:32.188347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:02:32.195838 (udev-worker)[1766]: Network interface NamePolicy= disabled on kernel command line. Sep 4 00:02:32.451973 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 00:02:32.457965 kernel: ACPI: button: Power Button [PWRF] Sep 4 00:02:32.478013 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 4 00:02:32.502312 kernel: ACPI: button: Sleep Button [SLPF] Sep 4 00:02:32.510369 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 00:02:32.511066 systemd[1]: Reloading finished in 708 ms. Sep 4 00:02:32.519956 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 00:02:32.526766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:02:32.529593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:02:32.575518 ldconfig[1552]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 00:02:32.576269 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:02:32.581208 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 00:02:32.586163 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 00:02:32.598483 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 00:02:32.603637 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 00:02:32.609231 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Sep 4 00:02:32.611993 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 00:02:32.626701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:02:32.627551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:02:32.632047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:02:32.636986 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:02:32.646542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:02:32.647696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:02:32.647879 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:02:32.648039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:02:32.662543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:02:32.662818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:02:32.664084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:02:32.664247 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 4 00:02:32.664381 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:02:32.671888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:02:32.673309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:02:32.675490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 00:02:32.677917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:02:32.678112 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:02:32.678370 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 00:02:32.679107 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:02:32.685738 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 00:02:32.687469 systemd[1]: Finished ensure-sysext.service. Sep 4 00:02:32.706449 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 00:02:32.728601 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 4 00:02:32.766536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:02:32.766895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:02:32.768610 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Sep 4 00:02:32.786925 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:02:32.787799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:02:32.789636 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:02:32.790048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:02:32.791407 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 00:02:32.794875 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 00:02:32.806783 augenrules[1908]: No rules Sep 4 00:02:32.809536 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:02:32.810492 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:02:32.813509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 00:02:32.814857 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 00:02:32.819187 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 00:02:32.839036 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 00:02:32.840115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 00:02:32.872376 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 00:02:32.880787 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 00:02:33.032873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:02:33.065029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:02:33.065613 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 00:02:33.068718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:02:33.104762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 00:02:33.107002 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 00:02:33.156082 systemd-networkd[1850]: lo: Link UP Sep 4 00:02:33.156094 systemd-networkd[1850]: lo: Gained carrier Sep 4 00:02:33.166234 systemd-networkd[1850]: Enumeration completed Sep 4 00:02:33.166380 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:02:33.168492 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:02:33.168498 systemd-networkd[1850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:02:33.172865 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 00:02:33.180035 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 00:02:33.184109 systemd-networkd[1850]: eth0: Link UP Sep 4 00:02:33.185419 systemd-networkd[1850]: eth0: Gained carrier Sep 4 00:02:33.185461 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:02:33.198562 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 00:02:33.203801 systemd-networkd[1850]: eth0: DHCPv4 address 172.31.20.83/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 00:02:33.219586 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 00:02:33.246529 systemd-resolved[1851]: Positive Trust Anchors: Sep 4 00:02:33.246553 systemd-resolved[1851]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:02:33.246605 systemd-resolved[1851]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:02:33.252015 systemd-resolved[1851]: Defaulting to hostname 'linux'. Sep 4 00:02:33.255316 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:02:33.256061 systemd[1]: Reached target network.target - Network. Sep 4 00:02:33.256649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:02:33.262025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:02:33.262792 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:02:33.263522 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 00:02:33.264050 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 00:02:33.264508 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 4 00:02:33.265096 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 00:02:33.265572 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 00:02:33.266145 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 00:02:33.266521 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 00:02:33.266568 systemd[1]: Reached target paths.target - Path Units. Sep 4 00:02:33.266971 systemd[1]: Reached target timers.target - Timer Units. Sep 4 00:02:33.269203 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 00:02:33.271027 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 00:02:33.274479 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 00:02:33.275161 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 00:02:33.275703 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 00:02:33.279236 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 00:02:33.281679 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 00:02:33.282917 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 00:02:33.284451 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 00:02:33.285060 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:02:33.285539 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 00:02:33.285577 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 00:02:33.286735 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 00:02:33.291110 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 00:02:33.302467 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 00:02:33.305506 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 4 00:02:33.311157 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 00:02:33.316196 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 00:02:33.316832 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 00:02:33.324148 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 00:02:33.330314 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 00:02:33.339085 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 00:02:33.360860 google_oslogin_nss_cache[1968]: oslogin_cache_refresh[1968]: Refreshing passwd entry cache Sep 4 00:02:33.361285 oslogin_cache_refresh[1968]: Refreshing passwd entry cache Sep 4 00:02:33.366054 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 00:02:33.368559 oslogin_cache_refresh[1968]: Failure getting users, quitting Sep 4 00:02:33.377385 google_oslogin_nss_cache[1968]: oslogin_cache_refresh[1968]: Failure getting users, quitting Sep 4 00:02:33.377385 google_oslogin_nss_cache[1968]: oslogin_cache_refresh[1968]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:02:33.377385 google_oslogin_nss_cache[1968]: oslogin_cache_refresh[1968]: Refreshing group entry cache Sep 4 00:02:33.368579 oslogin_cache_refresh[1968]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:02:33.368635 oslogin_cache_refresh[1968]: Refreshing group entry cache Sep 4 00:02:33.378632 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 00:02:33.390444 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 4 00:02:33.390472 oslogin_cache_refresh[1968]: Failure getting groups, quitting Sep 4 00:02:33.394758 google_oslogin_nss_cache[1968]: oslogin_cache_refresh[1968]: Failure getting groups, quitting Sep 4 00:02:33.394758 google_oslogin_nss_cache[1968]: oslogin_cache_refresh[1968]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:02:33.390490 oslogin_cache_refresh[1968]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:02:33.399287 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 00:02:33.413398 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 00:02:33.416872 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 00:02:33.418334 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 00:02:33.421276 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 00:02:33.423657 extend-filesystems[1967]: Found /dev/nvme0n1p6 Sep 4 00:02:33.426596 jq[1966]: false Sep 4 00:02:33.429631 extend-filesystems[1967]: Found /dev/nvme0n1p9 Sep 4 00:02:33.433687 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 00:02:33.443293 extend-filesystems[1967]: Checking size of /dev/nvme0n1p9 Sep 4 00:02:33.449025 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 00:02:33.455891 extend-filesystems[1967]: Resized partition /dev/nvme0n1p9 Sep 4 00:02:33.451333 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 4 00:02:33.460655 extend-filesystems[1994]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 00:02:33.485926 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 00:02:33.452369 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 00:02:33.452827 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 4 00:02:33.453946 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 00:02:33.475529 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 00:02:33.477044 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 00:02:33.512316 coreos-metadata[1963]: Sep 04 00:02:33.510 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 00:02:33.512316 coreos-metadata[1963]: Sep 04 00:02:33.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 00:02:33.513520 coreos-metadata[1963]: Sep 04 00:02:33.513 INFO Fetch successful Sep 4 00:02:33.514053 coreos-metadata[1963]: Sep 04 00:02:33.513 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 00:02:33.515083 coreos-metadata[1963]: Sep 04 00:02:33.514 INFO Fetch successful Sep 4 00:02:33.515083 coreos-metadata[1963]: Sep 04 00:02:33.514 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 00:02:33.515975 coreos-metadata[1963]: Sep 04 00:02:33.515 INFO Fetch successful Sep 4 00:02:33.515975 coreos-metadata[1963]: Sep 04 00:02:33.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 00:02:33.517094 coreos-metadata[1963]: Sep 04 00:02:33.516 INFO Fetch successful Sep 4 00:02:33.517852 coreos-metadata[1963]: Sep 04 00:02:33.517 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 00:02:33.518797 coreos-metadata[1963]: Sep 04 00:02:33.518 INFO Fetch failed with 404: resource not found
Sep 4 00:02:33.518797 coreos-metadata[1963]: Sep 04 00:02:33.518 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 00:02:33.521815 coreos-metadata[1963]: Sep 04 00:02:33.521 INFO Fetch successful Sep 4 00:02:33.521815 coreos-metadata[1963]: Sep 04 00:02:33.521 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 00:02:33.524649 coreos-metadata[1963]: Sep 04 00:02:33.524 INFO Fetch successful Sep 4 00:02:33.524649 coreos-metadata[1963]: Sep 04 00:02:33.524 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 00:02:33.527229 coreos-metadata[1963]: Sep 04 00:02:33.526 INFO Fetch successful Sep 4 00:02:33.527229 coreos-metadata[1963]: Sep 04 00:02:33.526 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 00:02:33.527229 coreos-metadata[1963]: Sep 04 00:02:33.527 INFO Fetch successful Sep 4 00:02:33.527229 coreos-metadata[1963]: Sep 04 00:02:33.527 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 00:02:33.529040 coreos-metadata[1963]: Sep 04 00:02:33.528 INFO Fetch successful Sep 4 00:02:33.547028 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 00:02:33.540553 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 00:02:33.553753 jq[1986]: true Sep 4 00:02:33.540867 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 00:02:33.563157 extend-filesystems[1994]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 00:02:33.563157 extend-filesystems[1994]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 00:02:33.563157 extend-filesystems[1994]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 4 00:02:33.579886 extend-filesystems[1967]: Resized filesystem in /dev/nvme0n1p9 Sep 4 00:02:33.570536 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 00:02:33.590161 tar[1996]: linux-amd64/LICENSE Sep 4 00:02:33.590161 tar[1996]: linux-amd64/helm Sep 4 00:02:33.571902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 00:02:33.618628 update_engine[1982]: I20250904 00:02:33.618054 1982 main.cc:92] Flatcar Update Engine starting Sep 4 00:02:33.633669 dbus-daemon[1964]: [system] SELinux support is enabled Sep 4 00:02:33.634967 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 00:02:33.651632 jq[2029]: true Sep 4 00:02:33.645577 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 00:02:33.645647 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 00:02:33.649120 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 00:02:33.652249 ntpd[1970]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting Sep 4 00:02:33.658212 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting Sep 4 00:02:33.658212 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 00:02:33.658212 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: ---------------------------------------------------- Sep 4 00:02:33.658212 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, Sep 4 00:02:33.658212 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 00:02:33.658212 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: corporation. Support and training for ntp-4 are Sep 4 00:02:33.658212 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: available at https://www.nwtime.org/support Sep 4 00:02:33.658212 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: ---------------------------------------------------- Sep 4 00:02:33.649155 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 00:02:33.652275 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 00:02:33.650788 (ntainerd)[2018]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 00:02:33.690057 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: proto: precision = 0.098 usec (-23) Sep 4 00:02:33.690057 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: basedate set to 2025-08-22 Sep 4 00:02:33.690057 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: gps base set to 2025-08-24 (week 2381) Sep 4 00:02:33.652284 ntpd[1970]: ---------------------------------------------------- Sep 4 00:02:33.658640 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 00:02:33.652293 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, Sep 4 00:02:33.660267 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 00:02:33.652301 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 00:02:33.676918 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 00:02:33.652309 ntpd[1970]: corporation. Support and training for ntp-4 are
Sep 4 00:02:33.652317 ntpd[1970]: available at https://www.nwtime.org/support Sep 4 00:02:33.652326 ntpd[1970]: ---------------------------------------------------- Sep 4 00:02:33.660873 dbus-daemon[1964]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1850 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 00:02:33.668191 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 00:02:33.681347 ntpd[1970]: proto: precision = 0.098 usec (-23) Sep 4 00:02:33.684395 ntpd[1970]: basedate set to 2025-08-22 Sep 4 00:02:33.684417 ntpd[1970]: gps base set to 2025-08-24 (week 2381) Sep 4 00:02:33.692802 systemd[1]: Started update-engine.service - Update Engine. Sep 4 00:02:33.704110 update_engine[1982]: I20250904 00:02:33.694674 1982 update_check_scheduler.cc:74] Next update check in 4m49s Sep 4 00:02:33.707124 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: Listen normally on 3 eth0 172.31.20.83:123 Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: Listen normally on 4 lo [::1]:123 Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: bind(21) AF_INET6 fe80::42c:ddff:fe5a:8d73%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: unable to create socket on eth0 (5) for fe80::42c:ddff:fe5a:8d73%2#123 Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: failed to init interface for address fe80::42c:ddff:fe5a:8d73%2
Sep 4 00:02:33.715178 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: Listening on routing socket on fd #21 for interface updates Sep 4 00:02:33.707185 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 00:02:33.708583 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 00:02:33.708628 ntpd[1970]: Listen normally on 3 eth0 172.31.20.83:123 Sep 4 00:02:33.708675 ntpd[1970]: Listen normally on 4 lo [::1]:123 Sep 4 00:02:33.708733 ntpd[1970]: bind(21) AF_INET6 fe80::42c:ddff:fe5a:8d73%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 00:02:33.708754 ntpd[1970]: unable to create socket on eth0 (5) for fe80::42c:ddff:fe5a:8d73%2#123 Sep 4 00:02:33.708770 ntpd[1970]: failed to init interface for address fe80::42c:ddff:fe5a:8d73%2 Sep 4 00:02:33.708804 ntpd[1970]: Listening on routing socket on fd #21 for interface updates Sep 4 00:02:33.744660 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 00:02:33.747093 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 00:02:33.747093 ntpd[1970]: 4 Sep 00:02:33 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 00:02:33.744704 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 00:02:33.771764 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 00:02:33.773754 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 00:02:33.809267 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 00:02:33.882028 bash[2105]: Updated "/home/core/.ssh/authorized_keys" Sep 4 00:02:33.885019 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 00:02:33.899292 systemd[1]: Starting sshkeys.service... Sep 4 00:02:33.996091 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 00:02:34.005058 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 00:02:34.074990 systemd-logind[1981]: Watching system buttons on /dev/input/event2 (Power Button) Sep 4 00:02:34.075023 systemd-logind[1981]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 4 00:02:34.075045 systemd-logind[1981]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 00:02:34.076141 systemd-logind[1981]: New seat seat0. Sep 4 00:02:34.077103 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 00:02:34.181284 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 00:02:34.184436 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 00:02:34.192923 dbus-daemon[1964]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2047 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 00:02:34.205073 systemd[1]: Starting polkit.service - Authorization Manager... 
Sep 4 00:02:34.209067 locksmithd[2066]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 00:02:34.337991 coreos-metadata[2150]: Sep 04 00:02:34.335 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 00:02:34.339547 coreos-metadata[2150]: Sep 04 00:02:34.339 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 00:02:34.340367 coreos-metadata[2150]: Sep 04 00:02:34.340 INFO Fetch successful Sep 4 00:02:34.340821 coreos-metadata[2150]: Sep 04 00:02:34.340 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 00:02:34.342203 coreos-metadata[2150]: Sep 04 00:02:34.340 INFO Fetch successful Sep 4 00:02:34.344079 unknown[2150]: wrote ssh authorized keys file for user: core Sep 4 00:02:34.399293 sshd_keygen[2040]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 00:02:34.405463 update-ssh-keys[2164]: Updated "/home/core/.ssh/authorized_keys" Sep 4 00:02:34.401767 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 00:02:34.408003 systemd[1]: Finished sshkeys.service. Sep 4 00:02:34.458561 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 00:02:34.467294 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 00:02:34.471259 systemd[1]: Started sshd@0-172.31.20.83:22-139.178.68.195:45608.service - OpenSSH per-connection server daemon (139.178.68.195:45608). Sep 4 00:02:34.498491 polkitd[2161]: Started polkitd version 126 Sep 4 00:02:34.508898 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 00:02:34.509683 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 00:02:34.517944 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 00:02:34.527161 polkitd[2161]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 00:02:34.535622 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 4 00:02:34.527723 polkitd[2161]: Loading rules from directory /run/polkit-1/rules.d Sep 4 00:02:34.527775 polkitd[2161]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 4 00:02:34.530274 polkitd[2161]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 4 00:02:34.530332 polkitd[2161]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 4 00:02:34.530384 polkitd[2161]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 00:02:34.534624 polkitd[2161]: Finished loading, compiling and executing 2 rules Sep 4 00:02:34.540241 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 00:02:34.542000 polkitd[2161]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 00:02:34.550682 containerd[2018]: time="2025-09-04T00:02:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 00:02:34.553481 containerd[2018]: time="2025-09-04T00:02:34.553417908Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 4 00:02:34.573215 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 00:02:34.579733 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 00:02:34.587755 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 00:02:34.589605 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 00:02:34.624490 systemd-hostnamed[2047]: Hostname set to (transient) Sep 4 00:02:34.624490 systemd-resolved[1851]: System hostname changed to 'ip-172-31-20-83'. 
Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633190283Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.153µs" Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633236824Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633262350Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633445641Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633468610Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633501452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633571027Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633586296Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633897134Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 00:02:34.633961 containerd[2018]: time="2025-09-04T00:02:34.633918068Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 00:02:34.636154 containerd[2018]: time="2025-09-04T00:02:34.634959717Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 00:02:34.636154 containerd[2018]: time="2025-09-04T00:02:34.634989071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 4 00:02:34.636154 containerd[2018]: time="2025-09-04T00:02:34.635118408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 4 00:02:34.636154 containerd[2018]: time="2025-09-04T00:02:34.635422453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 00:02:34.636154 containerd[2018]: time="2025-09-04T00:02:34.635462374Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 00:02:34.636154 containerd[2018]: time="2025-09-04T00:02:34.635481099Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 4 00:02:34.636611 containerd[2018]: time="2025-09-04T00:02:34.636537258Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 4 00:02:34.639694 containerd[2018]: time="2025-09-04T00:02:34.639319768Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 4 00:02:34.639694 containerd[2018]: time="2025-09-04T00:02:34.639438087Z" level=info msg="metadata content store policy set" policy=shared Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644088414Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 
Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644162684Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644184178Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644196607Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644240263Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644251803Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644265279Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644284654Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644295260Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644306747Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644315710Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644327203Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 
00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644446081Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 00:02:34.644677 containerd[2018]: time="2025-09-04T00:02:34.644464440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644478672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644489752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644499878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644510658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644523071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644533698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644545233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644555150Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644572541Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644633157Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 4 00:02:34.645126 containerd[2018]: time="2025-09-04T00:02:34.644646350Z" level=info msg="Start snapshots syncer" Sep 4 00:02:34.646965 containerd[2018]: time="2025-09-04T00:02:34.645602589Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 4 00:02:34.646965 containerd[2018]: time="2025-09-04T00:02:34.645971672Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\
"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646079340Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646183070Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646331044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646359043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646374077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646388784Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646407815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646422671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646436745Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646467479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 
Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646485695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646501569Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646538161Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:02:34.647266 containerd[2018]: time="2025-09-04T00:02:34.646561967Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646596137Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646611512Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646623862Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646637568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646652967Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646673648Z" level=info msg="runtime interface created" Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646681615Z" level=info msg="created NRI interface" Sep 4 00:02:34.647743 
containerd[2018]: time="2025-09-04T00:02:34.646694270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646710018Z" level=info msg="Connect containerd service" Sep 4 00:02:34.647743 containerd[2018]: time="2025-09-04T00:02:34.646741982Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 00:02:34.651090 containerd[2018]: time="2025-09-04T00:02:34.651050139Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:02:34.652816 ntpd[1970]: bind(24) AF_INET6 fe80::42c:ddff:fe5a:8d73%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 00:02:34.653379 ntpd[1970]: 4 Sep 00:02:34 ntpd[1970]: bind(24) AF_INET6 fe80::42c:ddff:fe5a:8d73%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 00:02:34.653379 ntpd[1970]: 4 Sep 00:02:34 ntpd[1970]: unable to create socket on eth0 (6) for fe80::42c:ddff:fe5a:8d73%2#123 Sep 4 00:02:34.653379 ntpd[1970]: 4 Sep 00:02:34 ntpd[1970]: failed to init interface for address fe80::42c:ddff:fe5a:8d73%2 Sep 4 00:02:34.653241 ntpd[1970]: unable to create socket on eth0 (6) for fe80::42c:ddff:fe5a:8d73%2#123 Sep 4 00:02:34.653257 ntpd[1970]: failed to init interface for address fe80::42c:ddff:fe5a:8d73%2 Sep 4 00:02:34.686319 systemd-networkd[1850]: eth0: Gained IPv6LL Sep 4 00:02:34.690793 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 00:02:34.692964 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 00:02:34.697259 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 00:02:34.703057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 00:02:34.708311 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 00:02:34.813807 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 00:02:34.904506 amazon-ssm-agent[2198]: Initializing new seelog logger Sep 4 00:02:34.905602 amazon-ssm-agent[2198]: New Seelog Logger Creation Complete Sep 4 00:02:34.905602 amazon-ssm-agent[2198]: 2025/09/04 00:02:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 00:02:34.905602 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 00:02:34.905602 amazon-ssm-agent[2198]: 2025/09/04 00:02:34 processing appconfig overrides Sep 4 00:02:34.906528 amazon-ssm-agent[2198]: 2025/09/04 00:02:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 00:02:34.906600 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 00:02:34.906766 amazon-ssm-agent[2198]: 2025/09/04 00:02:34 processing appconfig overrides Sep 4 00:02:34.907142 amazon-ssm-agent[2198]: 2025/09/04 00:02:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 00:02:34.907203 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 00:02:34.907336 amazon-ssm-agent[2198]: 2025/09/04 00:02:34 processing appconfig overrides Sep 4 00:02:34.907881 amazon-ssm-agent[2198]: 2025-09-04 00:02:34.9059 INFO Proxy environment variables: Sep 4 00:02:34.912168 amazon-ssm-agent[2198]: 2025/09/04 00:02:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 00:02:34.912168 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 4 00:02:34.912313 amazon-ssm-agent[2198]: 2025/09/04 00:02:34 processing appconfig overrides Sep 4 00:02:34.927615 sshd[2176]: Accepted publickey for core from 139.178.68.195 port 45608 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:02:34.935473 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:02:34.953842 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 00:02:34.962283 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 00:02:34.966196 tar[1996]: linux-amd64/README.md Sep 4 00:02:35.006365 systemd-logind[1981]: New session 1 of user core. Sep 4 00:02:35.010487 amazon-ssm-agent[2198]: 2025-09-04 00:02:34.9064 INFO https_proxy: Sep 4 00:02:35.029831 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 00:02:35.032540 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 00:02:35.042031 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 00:02:35.071604 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 00:02:35.079429 systemd-logind[1981]: New session c1 of user core. 
Sep 4 00:02:35.107475 amazon-ssm-agent[2198]: 2025-09-04 00:02:34.9064 INFO http_proxy: Sep 4 00:02:35.151068 containerd[2018]: time="2025-09-04T00:02:35.149987976Z" level=info msg="Start subscribing containerd event" Sep 4 00:02:35.151068 containerd[2018]: time="2025-09-04T00:02:35.151021911Z" level=info msg="Start recovering state" Sep 4 00:02:35.151213 containerd[2018]: time="2025-09-04T00:02:35.151150429Z" level=info msg="Start event monitor" Sep 4 00:02:35.151213 containerd[2018]: time="2025-09-04T00:02:35.151168223Z" level=info msg="Start cni network conf syncer for default" Sep 4 00:02:35.151213 containerd[2018]: time="2025-09-04T00:02:35.151177162Z" level=info msg="Start streaming server" Sep 4 00:02:35.151213 containerd[2018]: time="2025-09-04T00:02:35.151191391Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 00:02:35.151213 containerd[2018]: time="2025-09-04T00:02:35.151201444Z" level=info msg="runtime interface starting up..." Sep 4 00:02:35.151213 containerd[2018]: time="2025-09-04T00:02:35.151209937Z" level=info msg="starting plugins..." Sep 4 00:02:35.151434 containerd[2018]: time="2025-09-04T00:02:35.151226153Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 00:02:35.151729 containerd[2018]: time="2025-09-04T00:02:35.151520775Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 00:02:35.151729 containerd[2018]: time="2025-09-04T00:02:35.151710652Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 00:02:35.152043 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 00:02:35.154135 containerd[2018]: time="2025-09-04T00:02:35.153720754Z" level=info msg="containerd successfully booted in 0.604894s" Sep 4 00:02:35.209038 amazon-ssm-agent[2198]: 2025-09-04 00:02:34.9064 INFO no_proxy: Sep 4 00:02:35.305951 amazon-ssm-agent[2198]: 2025-09-04 00:02:34.9068 INFO Checking if agent identity type OnPrem can be assumed Sep 4 00:02:35.399126 systemd[2226]: Queued start job for default target default.target. Sep 4 00:02:35.403793 amazon-ssm-agent[2198]: 2025-09-04 00:02:34.9070 INFO Checking if agent identity type EC2 can be assumed Sep 4 00:02:35.405744 systemd[2226]: Created slice app.slice - User Application Slice. Sep 4 00:02:35.405784 systemd[2226]: Reached target paths.target - Paths. Sep 4 00:02:35.405839 systemd[2226]: Reached target timers.target - Timers. Sep 4 00:02:35.409044 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 00:02:35.434985 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 00:02:35.435141 systemd[2226]: Reached target sockets.target - Sockets. Sep 4 00:02:35.435294 systemd[2226]: Reached target basic.target - Basic System. Sep 4 00:02:35.435361 systemd[2226]: Reached target default.target - Main User Target. Sep 4 00:02:35.435399 systemd[2226]: Startup finished in 340ms. Sep 4 00:02:35.435491 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 00:02:35.441217 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 00:02:35.476011 amazon-ssm-agent[2198]: 2025/09/04 00:02:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 00:02:35.476011 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 4 00:02:35.476148 amazon-ssm-agent[2198]: 2025/09/04 00:02:35 processing appconfig overrides Sep 4 00:02:35.503500 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0139 INFO Agent will take identity from EC2 Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0170 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0171 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0171 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0171 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0171 INFO [Registrar] Starting registrar module Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0243 INFO [EC2Identity] Checking disk for registration info Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0244 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.0244 INFO [EC2Identity] Generating registration keypair Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.4292 INFO [EC2Identity] Checking write access before registering Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.4297 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.4755 INFO [EC2Identity] EC2 registration was successful. Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.4756 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.4758 INFO [CredentialRefresher] credentialRefresher has started Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.4758 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.5038 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 00:02:35.504275 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.5040 INFO [CredentialRefresher] Credentials ready Sep 4 00:02:35.592027 systemd[1]: Started sshd@1-172.31.20.83:22-139.178.68.195:45616.service - OpenSSH per-connection server daemon (139.178.68.195:45616). Sep 4 00:02:35.601615 amazon-ssm-agent[2198]: 2025-09-04 00:02:35.5042 INFO [CredentialRefresher] Next credential rotation will be in 29.999994629066666 minutes Sep 4 00:02:35.765493 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 45616 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:02:35.767149 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:02:35.772561 systemd-logind[1981]: New session 2 of user core. Sep 4 00:02:35.779185 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 00:02:35.897837 sshd[2246]: Connection closed by 139.178.68.195 port 45616 Sep 4 00:02:35.898389 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Sep 4 00:02:35.902632 systemd[1]: sshd@1-172.31.20.83:22-139.178.68.195:45616.service: Deactivated successfully. Sep 4 00:02:35.903116 systemd-logind[1981]: Session 2 logged out. Waiting for processes to exit. Sep 4 00:02:35.904823 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 00:02:35.906743 systemd-logind[1981]: Removed session 2. Sep 4 00:02:35.931713 systemd[1]: Started sshd@2-172.31.20.83:22-139.178.68.195:45632.service - OpenSSH per-connection server daemon (139.178.68.195:45632). 
Sep 4 00:02:36.107861 sshd[2252]: Accepted publickey for core from 139.178.68.195 port 45632 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:02:36.109725 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:02:36.117091 systemd-logind[1981]: New session 3 of user core. Sep 4 00:02:36.124175 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 00:02:36.243743 sshd[2254]: Connection closed by 139.178.68.195 port 45632 Sep 4 00:02:36.244445 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Sep 4 00:02:36.249877 systemd[1]: sshd@2-172.31.20.83:22-139.178.68.195:45632.service: Deactivated successfully. Sep 4 00:02:36.252891 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 00:02:36.254110 systemd-logind[1981]: Session 3 logged out. Waiting for processes to exit. Sep 4 00:02:36.256466 systemd-logind[1981]: Removed session 3. Sep 4 00:02:36.517827 amazon-ssm-agent[2198]: 2025-09-04 00:02:36.5175 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 00:02:36.619131 amazon-ssm-agent[2198]: 2025-09-04 00:02:36.5201 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2261) started Sep 4 00:02:36.720266 amazon-ssm-agent[2198]: 2025-09-04 00:02:36.5201 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 00:02:37.460537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:02:37.462461 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 00:02:37.466054 systemd[1]: Startup finished in 2.723s (kernel) + 9.243s (initrd) + 8.603s (userspace) = 20.570s. 
Sep 4 00:02:37.473006 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:02:37.652749 ntpd[1970]: Listen normally on 7 eth0 [fe80::42c:ddff:fe5a:8d73%2]:123 Sep 4 00:02:37.653351 ntpd[1970]: 4 Sep 00:02:37 ntpd[1970]: Listen normally on 7 eth0 [fe80::42c:ddff:fe5a:8d73%2]:123 Sep 4 00:02:38.725540 kubelet[2278]: E0904 00:02:38.725454 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:02:38.728831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:02:38.729047 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:02:38.729676 systemd[1]: kubelet.service: Consumed 1.094s CPU time, 268.1M memory peak. Sep 4 00:02:41.632570 systemd-resolved[1851]: Clock change detected. Flushing caches. Sep 4 00:02:47.268674 systemd[1]: Started sshd@3-172.31.20.83:22-139.178.68.195:53548.service - OpenSSH per-connection server daemon (139.178.68.195:53548). Sep 4 00:02:47.453433 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 53548 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:02:47.456265 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:02:47.462539 systemd-logind[1981]: New session 4 of user core. Sep 4 00:02:47.473112 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 4 00:02:47.591130 sshd[2293]: Connection closed by 139.178.68.195 port 53548 Sep 4 00:02:47.592418 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Sep 4 00:02:47.596110 systemd[1]: sshd@3-172.31.20.83:22-139.178.68.195:53548.service: Deactivated successfully. Sep 4 00:02:47.598569 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 00:02:47.601199 systemd-logind[1981]: Session 4 logged out. Waiting for processes to exit. Sep 4 00:02:47.602511 systemd-logind[1981]: Removed session 4. Sep 4 00:02:47.631983 systemd[1]: Started sshd@4-172.31.20.83:22-139.178.68.195:53550.service - OpenSSH per-connection server daemon (139.178.68.195:53550). Sep 4 00:02:47.815032 sshd[2299]: Accepted publickey for core from 139.178.68.195 port 53550 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:02:47.816630 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:02:47.821969 systemd-logind[1981]: New session 5 of user core. Sep 4 00:02:47.826058 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 00:02:47.943796 sshd[2301]: Connection closed by 139.178.68.195 port 53550 Sep 4 00:02:47.945367 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Sep 4 00:02:47.949822 systemd[1]: sshd@4-172.31.20.83:22-139.178.68.195:53550.service: Deactivated successfully. Sep 4 00:02:47.952058 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 00:02:47.953255 systemd-logind[1981]: Session 5 logged out. Waiting for processes to exit. Sep 4 00:02:47.954864 systemd-logind[1981]: Removed session 5. Sep 4 00:02:47.977082 systemd[1]: Started sshd@5-172.31.20.83:22-139.178.68.195:53554.service - OpenSSH per-connection server daemon (139.178.68.195:53554). 
Sep 4 00:02:48.155616 sshd[2307]: Accepted publickey for core from 139.178.68.195 port 53554 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:02:48.156995 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:02:48.161909 systemd-logind[1981]: New session 6 of user core.
Sep 4 00:02:48.169128 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 00:02:48.288370 sshd[2309]: Connection closed by 139.178.68.195 port 53554
Sep 4 00:02:48.289179 sshd-session[2307]: pam_unix(sshd:session): session closed for user core
Sep 4 00:02:48.293129 systemd[1]: sshd@5-172.31.20.83:22-139.178.68.195:53554.service: Deactivated successfully.
Sep 4 00:02:48.295037 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 00:02:48.295805 systemd-logind[1981]: Session 6 logged out. Waiting for processes to exit.
Sep 4 00:02:48.297239 systemd-logind[1981]: Removed session 6.
Sep 4 00:02:48.324095 systemd[1]: Started sshd@6-172.31.20.83:22-139.178.68.195:53558.service - OpenSSH per-connection server daemon (139.178.68.195:53558).
Sep 4 00:02:48.506495 sshd[2315]: Accepted publickey for core from 139.178.68.195 port 53558 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:02:48.507805 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:02:48.513350 systemd-logind[1981]: New session 7 of user core.
Sep 4 00:02:48.519126 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 00:02:48.663774 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 00:02:48.664074 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:02:48.686586 sudo[2318]: pam_unix(sudo:session): session closed for user root
Sep 4 00:02:48.709484 sshd[2317]: Connection closed by 139.178.68.195 port 53558
Sep 4 00:02:48.710344 sshd-session[2315]: pam_unix(sshd:session): session closed for user core
Sep 4 00:02:48.714641 systemd-logind[1981]: Session 7 logged out. Waiting for processes to exit.
Sep 4 00:02:48.715092 systemd[1]: sshd@6-172.31.20.83:22-139.178.68.195:53558.service: Deactivated successfully.
Sep 4 00:02:48.716841 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 00:02:48.718797 systemd-logind[1981]: Removed session 7.
Sep 4 00:02:48.744107 systemd[1]: Started sshd@7-172.31.20.83:22-139.178.68.195:53574.service - OpenSSH per-connection server daemon (139.178.68.195:53574).
Sep 4 00:02:48.928061 sshd[2324]: Accepted publickey for core from 139.178.68.195 port 53574 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:02:48.929501 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:02:48.935146 systemd-logind[1981]: New session 8 of user core.
Sep 4 00:02:48.944132 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 00:02:49.040608 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 00:02:49.041012 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:02:49.046589 sudo[2328]: pam_unix(sudo:session): session closed for user root
Sep 4 00:02:49.052058 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 00:02:49.052318 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:02:49.062919 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:02:49.101115 augenrules[2350]: No rules
Sep 4 00:02:49.102455 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:02:49.102698 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:02:49.103628 sudo[2327]: pam_unix(sudo:session): session closed for user root
Sep 4 00:02:49.126263 sshd[2326]: Connection closed by 139.178.68.195 port 53574
Sep 4 00:02:49.126760 sshd-session[2324]: pam_unix(sshd:session): session closed for user core
Sep 4 00:02:49.130537 systemd[1]: sshd@7-172.31.20.83:22-139.178.68.195:53574.service: Deactivated successfully.
Sep 4 00:02:49.132265 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 00:02:49.133011 systemd-logind[1981]: Session 8 logged out. Waiting for processes to exit.
Sep 4 00:02:49.134576 systemd-logind[1981]: Removed session 8.
Sep 4 00:02:49.169856 systemd[1]: Started sshd@8-172.31.20.83:22-139.178.68.195:53578.service - OpenSSH per-connection server daemon (139.178.68.195:53578).
Sep 4 00:02:49.352497 sshd[2359]: Accepted publickey for core from 139.178.68.195 port 53578 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:02:49.353997 sshd-session[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:02:49.359952 systemd-logind[1981]: New session 9 of user core.
Sep 4 00:02:49.367119 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 00:02:49.464510 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 00:02:49.464920 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:02:49.958674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:02:49.960363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:02:50.201451 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 00:02:50.214462 (dockerd)[2385]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 00:02:50.268029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:02:50.280351 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:02:50.338584 kubelet[2391]: E0904 00:02:50.338532 2391 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:02:50.343999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:02:50.344265 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:02:50.344837 systemd[1]: kubelet.service: Consumed 187ms CPU time, 110M memory peak.
Sep 4 00:02:50.675106 dockerd[2385]: time="2025-09-04T00:02:50.674743265Z" level=info msg="Starting up"
Sep 4 00:02:50.676302 dockerd[2385]: time="2025-09-04T00:02:50.676265996Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 4 00:02:50.728393 systemd[1]: var-lib-docker-metacopy\x2dcheck3324283468-merged.mount: Deactivated successfully.
Sep 4 00:02:50.746727 dockerd[2385]: time="2025-09-04T00:02:50.746501253Z" level=info msg="Loading containers: start."
Sep 4 00:02:50.758906 kernel: Initializing XFRM netlink socket
Sep 4 00:02:51.043400 (udev-worker)[2418]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 00:02:51.092456 systemd-networkd[1850]: docker0: Link UP
Sep 4 00:02:51.098388 dockerd[2385]: time="2025-09-04T00:02:51.098333734Z" level=info msg="Loading containers: done."
Sep 4 00:02:51.116261 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck654042456-merged.mount: Deactivated successfully.
Sep 4 00:02:51.124298 dockerd[2385]: time="2025-09-04T00:02:51.124228318Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 00:02:51.124464 dockerd[2385]: time="2025-09-04T00:02:51.124322394Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 4 00:02:51.124464 dockerd[2385]: time="2025-09-04T00:02:51.124444989Z" level=info msg="Initializing buildkit"
Sep 4 00:02:51.153585 dockerd[2385]: time="2025-09-04T00:02:51.153541807Z" level=info msg="Completed buildkit initialization"
Sep 4 00:02:51.163095 dockerd[2385]: time="2025-09-04T00:02:51.162852082Z" level=info msg="Daemon has completed initialization"
Sep 4 00:02:51.163095 dockerd[2385]: time="2025-09-04T00:02:51.162932041Z" level=info msg="API listen on /run/docker.sock"
Sep 4 00:02:51.163220 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 00:02:52.606757 containerd[2018]: time="2025-09-04T00:02:52.606719431Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 4 00:02:53.225473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408340527.mount: Deactivated successfully.
Sep 4 00:02:55.068568 containerd[2018]: time="2025-09-04T00:02:55.068504112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:55.069738 containerd[2018]: time="2025-09-04T00:02:55.069560654Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664"
Sep 4 00:02:55.071161 containerd[2018]: time="2025-09-04T00:02:55.071122268Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:55.074044 containerd[2018]: time="2025-09-04T00:02:55.073970341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:55.075868 containerd[2018]: time="2025-09-04T00:02:55.075222419Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.468462177s"
Sep 4 00:02:55.075868 containerd[2018]: time="2025-09-04T00:02:55.075270561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Sep 4 00:02:55.076045 containerd[2018]: time="2025-09-04T00:02:55.075892509Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 4 00:02:57.544364 containerd[2018]: time="2025-09-04T00:02:57.544293961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:57.545412 containerd[2018]: time="2025-09-04T00:02:57.545277332Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066"
Sep 4 00:02:57.546614 containerd[2018]: time="2025-09-04T00:02:57.546578061Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:57.549026 containerd[2018]: time="2025-09-04T00:02:57.548976449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:57.550143 containerd[2018]: time="2025-09-04T00:02:57.549691526Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 2.473773152s"
Sep 4 00:02:57.550143 containerd[2018]: time="2025-09-04T00:02:57.549725104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Sep 4 00:02:57.550797 containerd[2018]: time="2025-09-04T00:02:57.550762922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 4 00:02:59.539242 containerd[2018]: time="2025-09-04T00:02:59.539191572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:59.540399 containerd[2018]: time="2025-09-04T00:02:59.540310030Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911"
Sep 4 00:02:59.541312 containerd[2018]: time="2025-09-04T00:02:59.541277892Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:59.544895 containerd[2018]: time="2025-09-04T00:02:59.544123701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:02:59.545252 containerd[2018]: time="2025-09-04T00:02:59.545221192Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.994415667s"
Sep 4 00:02:59.545361 containerd[2018]: time="2025-09-04T00:02:59.545343494Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Sep 4 00:02:59.546467 containerd[2018]: time="2025-09-04T00:02:59.546443073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 4 00:03:00.468128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 00:03:00.472320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:00.860104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:00.873363 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:03:00.949546 kubelet[2673]: E0904 00:03:00.949482 2673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:03:00.954750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:03:00.955189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:03:00.955975 systemd[1]: kubelet.service: Consumed 212ms CPU time, 108.1M memory peak.
Sep 4 00:03:01.073148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294558264.mount: Deactivated successfully.
Sep 4 00:03:02.732449 containerd[2018]: time="2025-09-04T00:03:02.732274955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:02.734713 containerd[2018]: time="2025-09-04T00:03:02.734633035Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626"
Sep 4 00:03:02.738592 containerd[2018]: time="2025-09-04T00:03:02.738445508Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:02.751756 containerd[2018]: time="2025-09-04T00:03:02.749612169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:02.752203 containerd[2018]: time="2025-09-04T00:03:02.752158505Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 3.205584408s"
Sep 4 00:03:02.752288 containerd[2018]: time="2025-09-04T00:03:02.752210137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 4 00:03:02.752993 containerd[2018]: time="2025-09-04T00:03:02.752962948Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 4 00:03:03.366256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285033037.mount: Deactivated successfully.
Sep 4 00:03:04.840456 containerd[2018]: time="2025-09-04T00:03:04.840381946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:04.845111 containerd[2018]: time="2025-09-04T00:03:04.845032427Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 4 00:03:04.850335 containerd[2018]: time="2025-09-04T00:03:04.850260160Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:04.855030 containerd[2018]: time="2025-09-04T00:03:04.854949506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:04.857518 containerd[2018]: time="2025-09-04T00:03:04.857129760Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.104126842s"
Sep 4 00:03:04.857518 containerd[2018]: time="2025-09-04T00:03:04.857189607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 4 00:03:04.858758 containerd[2018]: time="2025-09-04T00:03:04.858706127Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 00:03:05.327558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242188137.mount: Deactivated successfully.
Sep 4 00:03:05.332033 containerd[2018]: time="2025-09-04T00:03:05.331979445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:03:05.332979 containerd[2018]: time="2025-09-04T00:03:05.332913165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 00:03:05.335679 containerd[2018]: time="2025-09-04T00:03:05.333955261Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:03:05.336865 containerd[2018]: time="2025-09-04T00:03:05.336825326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:03:05.337805 containerd[2018]: time="2025-09-04T00:03:05.337771318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 479.026366ms"
Sep 4 00:03:05.337964 containerd[2018]: time="2025-09-04T00:03:05.337943332Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 00:03:05.338848 containerd[2018]: time="2025-09-04T00:03:05.338817479Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 4 00:03:05.635842 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 4 00:03:05.833682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681982611.mount: Deactivated successfully.
Sep 4 00:03:08.329210 containerd[2018]: time="2025-09-04T00:03:08.329152104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:08.331136 containerd[2018]: time="2025-09-04T00:03:08.331090159Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871"
Sep 4 00:03:08.333103 containerd[2018]: time="2025-09-04T00:03:08.333039830Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:08.337864 containerd[2018]: time="2025-09-04T00:03:08.336917490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:08.337864 containerd[2018]: time="2025-09-04T00:03:08.337742655Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.99889452s"
Sep 4 00:03:08.337864 containerd[2018]: time="2025-09-04T00:03:08.337773015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 4 00:03:10.968124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 00:03:10.970743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:11.258066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:11.273503 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:03:11.330234 kubelet[2826]: E0904 00:03:11.330181 2826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:03:11.334206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:03:11.334397 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:03:11.334799 systemd[1]: kubelet.service: Consumed 205ms CPU time, 107.8M memory peak.
Sep 4 00:03:11.814567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:11.814829 systemd[1]: kubelet.service: Consumed 205ms CPU time, 107.8M memory peak.
Sep 4 00:03:11.817772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:11.853074 systemd[1]: Reload requested from client PID 2840 ('systemctl') (unit session-9.scope)...
Sep 4 00:03:11.853095 systemd[1]: Reloading...
Sep 4 00:03:11.985908 zram_generator::config[2881]: No configuration found.
Sep 4 00:03:12.139540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:03:12.277395 systemd[1]: Reloading finished in 423 ms.
Sep 4 00:03:12.347424 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 00:03:12.347516 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 00:03:12.348083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:12.348191 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98M memory peak.
Sep 4 00:03:12.350566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:12.633776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:12.646350 (kubelet)[2947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 00:03:12.769865 kubelet[2947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:03:12.769865 kubelet[2947]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 00:03:12.769865 kubelet[2947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:03:12.775733 kubelet[2947]: I0904 00:03:12.775634 2947 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 00:03:13.585901 kubelet[2947]: I0904 00:03:13.585224 2947 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 4 00:03:13.585901 kubelet[2947]: I0904 00:03:13.585253 2947 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 00:03:13.585901 kubelet[2947]: I0904 00:03:13.585629 2947 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 4 00:03:13.651670 kubelet[2947]: I0904 00:03:13.651626 2947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 00:03:13.665136 kubelet[2947]: E0904 00:03:13.664950 2947 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 4 00:03:13.710283 kubelet[2947]: I0904 00:03:13.710242 2947 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 00:03:13.718697 kubelet[2947]: I0904 00:03:13.718647 2947 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 00:03:13.727203 kubelet[2947]: I0904 00:03:13.727124 2947 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 00:03:13.731581 kubelet[2947]: I0904 00:03:13.727199 2947 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 00:03:13.731581 kubelet[2947]: I0904 00:03:13.731589 2947 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 00:03:13.731863 kubelet[2947]: I0904 00:03:13.731607 2947 container_manager_linux.go:303] "Creating device plugin manager"
Sep 4 00:03:13.731863 kubelet[2947]: I0904 00:03:13.731785 2947 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:03:13.736090 kubelet[2947]: I0904 00:03:13.735900 2947 kubelet.go:480] "Attempting to sync node with API server"
Sep 4 00:03:13.736090 kubelet[2947]: I0904 00:03:13.735957 2947 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 00:03:13.736090 kubelet[2947]: I0904 00:03:13.736000 2947 kubelet.go:386] "Adding apiserver pod source"
Sep 4 00:03:13.738253 kubelet[2947]: I0904 00:03:13.738045 2947 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 00:03:13.750668 kubelet[2947]: E0904 00:03:13.750628 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-83&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 4 00:03:13.756557 kubelet[2947]: I0904 00:03:13.755970 2947 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 4 00:03:13.756557 kubelet[2947]: I0904 00:03:13.756445 2947 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 4 00:03:13.757755 kubelet[2947]: W0904 00:03:13.757333 2947 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 00:03:13.760236 kubelet[2947]: E0904 00:03:13.759678 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 4 00:03:13.763033 kubelet[2947]: I0904 00:03:13.763001 2947 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 00:03:13.763163 kubelet[2947]: I0904 00:03:13.763073 2947 server.go:1289] "Started kubelet"
Sep 4 00:03:13.764972 kubelet[2947]: I0904 00:03:13.764916 2947 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 00:03:13.770291 kubelet[2947]: I0904 00:03:13.770207 2947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 00:03:13.770726 kubelet[2947]: I0904 00:03:13.770668 2947 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 00:03:13.773144 kubelet[2947]: I0904 00:03:13.773112 2947 server.go:317] "Adding debug handlers to kubelet server"
Sep 4 00:03:13.786632 kubelet[2947]: E0904 00:03:13.779316 2947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.83:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.83:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-83.1861eb6f7f6e2e8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-83,UID:ip-172-31-20-83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-83,},FirstTimestamp:2025-09-04 00:03:13.763028622 +0000 UTC m=+1.103730607,LastTimestamp:2025-09-04 00:03:13.763028622 +0000 UTC m=+1.103730607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-83,}"
Sep 4 00:03:13.787289 kubelet[2947]: I0904 00:03:13.787267 2947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 00:03:13.787774 kubelet[2947]: I0904 00:03:13.787758 2947 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 00:03:13.789427 kubelet[2947]: E0904 00:03:13.789408 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:13.789509 kubelet[2947]: I0904 00:03:13.789503 2947 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 00:03:13.789756 kubelet[2947]: I0904 00:03:13.789747 2947 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 00:03:13.789863 kubelet[2947]: I0904 00:03:13.789856 2947 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 00:03:13.791531 kubelet[2947]: E0904 00:03:13.791326 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 4 00:03:13.793814 kubelet[2947]: E0904 00:03:13.793777 2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="200ms"
Sep 4 00:03:13.801851 kubelet[2947]: I0904 00:03:13.801822 2947 factory.go:223] Registration of the systemd container factory successfully
Sep 4 00:03:13.802156 kubelet[2947]: I0904 00:03:13.802134 2947 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 00:03:13.803731 kubelet[2947]: E0904 00:03:13.803709 2947 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 00:03:13.804424 kubelet[2947]: I0904 00:03:13.804400 2947 factory.go:223] Registration of the containerd container factory successfully
Sep 4 00:03:13.817960 kubelet[2947]: I0904 00:03:13.817184 2947 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 00:03:13.817960 kubelet[2947]: I0904 00:03:13.817198 2947 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 00:03:13.817960 kubelet[2947]: I0904 00:03:13.817217 2947 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:03:13.825371 kubelet[2947]: I0904 00:03:13.824919 2947 policy_none.go:49] "None policy: Start"
Sep 4 00:03:13.825371 kubelet[2947]: I0904 00:03:13.824959 2947 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 00:03:13.825371 kubelet[2947]: I0904 00:03:13.824993 2947 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 00:03:13.829309 kubelet[2947]: I0904 00:03:13.829272 2947 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 4 00:03:13.832414 kubelet[2947]: I0904 00:03:13.832386 2947 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 4 00:03:13.832591 kubelet[2947]: I0904 00:03:13.832579 2947 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 4 00:03:13.832996 kubelet[2947]: I0904 00:03:13.832642 2947 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 00:03:13.832996 kubelet[2947]: I0904 00:03:13.832654 2947 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 00:03:13.832996 kubelet[2947]: E0904 00:03:13.832701 2947 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:03:13.838871 kubelet[2947]: E0904 00:03:13.838752 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 00:03:13.845606 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 00:03:13.860859 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 00:03:13.865259 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 00:03:13.875226 kubelet[2947]: E0904 00:03:13.875198 2947 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 00:03:13.875739 kubelet[2947]: I0904 00:03:13.875720 2947 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:03:13.875978 kubelet[2947]: I0904 00:03:13.875843 2947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:03:13.876313 kubelet[2947]: I0904 00:03:13.876302 2947 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:03:13.878427 kubelet[2947]: E0904 00:03:13.878392 2947 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 00:03:13.878622 kubelet[2947]: E0904 00:03:13.878585 2947 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-83\" not found" Sep 4 00:03:13.946647 systemd[1]: Created slice kubepods-burstable-podec22c44167151ef6fcc07c25b1eb77eb.slice - libcontainer container kubepods-burstable-podec22c44167151ef6fcc07c25b1eb77eb.slice. Sep 4 00:03:13.964792 kubelet[2947]: E0904 00:03:13.964580 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:13.968367 systemd[1]: Created slice kubepods-burstable-pod86eb50475b112416430f028a43ba4ef0.slice - libcontainer container kubepods-burstable-pod86eb50475b112416430f028a43ba4ef0.slice. Sep 4 00:03:13.978892 kubelet[2947]: I0904 00:03:13.978801 2947 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-83" Sep 4 00:03:13.979511 kubelet[2947]: E0904 00:03:13.979316 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Sep 4 00:03:13.979675 kubelet[2947]: E0904 00:03:13.979619 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:13.982518 systemd[1]: Created slice kubepods-burstable-pod6bf482dfc29e641c3129a03506137cca.slice - libcontainer container kubepods-burstable-pod6bf482dfc29e641c3129a03506137cca.slice. 
Sep 4 00:03:13.985263 kubelet[2947]: E0904 00:03:13.985222 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:13.992592 kubelet[2947]: I0904 00:03:13.992558 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec22c44167151ef6fcc07c25b1eb77eb-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ec22c44167151ef6fcc07c25b1eb77eb\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Sep 4 00:03:13.992715 kubelet[2947]: I0904 00:03:13.992660 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec22c44167151ef6fcc07c25b1eb77eb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ec22c44167151ef6fcc07c25b1eb77eb\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Sep 4 00:03:13.992715 kubelet[2947]: I0904 00:03:13.992693 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Sep 4 00:03:13.992715 kubelet[2947]: I0904 00:03:13.992710 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Sep 4 00:03:13.992802 kubelet[2947]: I0904 00:03:13.992727 2947 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6bf482dfc29e641c3129a03506137cca-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-83\" (UID: \"6bf482dfc29e641c3129a03506137cca\") " pod="kube-system/kube-scheduler-ip-172-31-20-83" Sep 4 00:03:13.992802 kubelet[2947]: I0904 00:03:13.992741 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec22c44167151ef6fcc07c25b1eb77eb-ca-certs\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ec22c44167151ef6fcc07c25b1eb77eb\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Sep 4 00:03:13.992802 kubelet[2947]: I0904 00:03:13.992755 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Sep 4 00:03:13.992802 kubelet[2947]: I0904 00:03:13.992769 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Sep 4 00:03:13.992802 kubelet[2947]: I0904 00:03:13.992787 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Sep 4 00:03:13.995139 kubelet[2947]: E0904 00:03:13.995098 2947 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="400ms" Sep 4 00:03:14.182329 kubelet[2947]: I0904 00:03:14.182220 2947 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-83" Sep 4 00:03:14.183451 kubelet[2947]: E0904 00:03:14.183413 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Sep 4 00:03:14.266582 containerd[2018]: time="2025-09-04T00:03:14.266332858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-83,Uid:ec22c44167151ef6fcc07c25b1eb77eb,Namespace:kube-system,Attempt:0,}" Sep 4 00:03:14.286418 containerd[2018]: time="2025-09-04T00:03:14.285725606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-83,Uid:86eb50475b112416430f028a43ba4ef0,Namespace:kube-system,Attempt:0,}" Sep 4 00:03:14.293820 containerd[2018]: time="2025-09-04T00:03:14.293560960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-83,Uid:6bf482dfc29e641c3129a03506137cca,Namespace:kube-system,Attempt:0,}" Sep 4 00:03:14.402687 kubelet[2947]: E0904 00:03:14.402625 2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="800ms" Sep 4 00:03:14.435357 containerd[2018]: time="2025-09-04T00:03:14.435148252Z" level=info msg="connecting to shim 941312204d466ab6db0be62542b66c2c8eb10cbda6a63c57f37fead996f2e7f5" 
address="unix:///run/containerd/s/e291f1b24b30fa4ffc4d76f6da57ae064bb951b611e950891a81f05117f1a513" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:03:14.443453 containerd[2018]: time="2025-09-04T00:03:14.443369623Z" level=info msg="connecting to shim 129a0b39cbc045991fa5e1122a16fe6119912b3133f8020bd557540d615f1e5a" address="unix:///run/containerd/s/d3863e43e2de5f1b8b93c491901f69b949bc9d662cefe7d5fa373069875336eb" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:03:14.455251 containerd[2018]: time="2025-09-04T00:03:14.455201496Z" level=info msg="connecting to shim cd44b0b8fffa7e8293ef998c7be16b4f1caa97a9c03e11a2028aea472f70cebc" address="unix:///run/containerd/s/b4c8a819ad742f5ed8380c344342251ac6191953ae89e790413fe7a74073dddd" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:03:14.563121 systemd[1]: Started cri-containerd-129a0b39cbc045991fa5e1122a16fe6119912b3133f8020bd557540d615f1e5a.scope - libcontainer container 129a0b39cbc045991fa5e1122a16fe6119912b3133f8020bd557540d615f1e5a. Sep 4 00:03:14.571984 systemd[1]: Started cri-containerd-941312204d466ab6db0be62542b66c2c8eb10cbda6a63c57f37fead996f2e7f5.scope - libcontainer container 941312204d466ab6db0be62542b66c2c8eb10cbda6a63c57f37fead996f2e7f5. Sep 4 00:03:14.575169 systemd[1]: Started cri-containerd-cd44b0b8fffa7e8293ef998c7be16b4f1caa97a9c03e11a2028aea472f70cebc.scope - libcontainer container cd44b0b8fffa7e8293ef998c7be16b4f1caa97a9c03e11a2028aea472f70cebc. 
Sep 4 00:03:14.591481 kubelet[2947]: I0904 00:03:14.591441 2947 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-83" Sep 4 00:03:14.591848 kubelet[2947]: E0904 00:03:14.591807 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Sep 4 00:03:14.692170 containerd[2018]: time="2025-09-04T00:03:14.691941192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-83,Uid:86eb50475b112416430f028a43ba4ef0,Namespace:kube-system,Attempt:0,} returns sandbox id \"129a0b39cbc045991fa5e1122a16fe6119912b3133f8020bd557540d615f1e5a\"" Sep 4 00:03:14.698783 containerd[2018]: time="2025-09-04T00:03:14.698519354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-83,Uid:ec22c44167151ef6fcc07c25b1eb77eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"941312204d466ab6db0be62542b66c2c8eb10cbda6a63c57f37fead996f2e7f5\"" Sep 4 00:03:14.702041 containerd[2018]: time="2025-09-04T00:03:14.701437692Z" level=info msg="CreateContainer within sandbox \"129a0b39cbc045991fa5e1122a16fe6119912b3133f8020bd557540d615f1e5a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 00:03:14.710397 kubelet[2947]: E0904 00:03:14.710348 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 00:03:14.710674 kubelet[2947]: E0904 00:03:14.710646 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 00:03:14.710770 containerd[2018]: time="2025-09-04T00:03:14.710665922Z" level=info msg="CreateContainer within sandbox \"941312204d466ab6db0be62542b66c2c8eb10cbda6a63c57f37fead996f2e7f5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 00:03:14.711529 containerd[2018]: time="2025-09-04T00:03:14.711497750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-83,Uid:6bf482dfc29e641c3129a03506137cca,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd44b0b8fffa7e8293ef998c7be16b4f1caa97a9c03e11a2028aea472f70cebc\"" Sep 4 00:03:14.718263 containerd[2018]: time="2025-09-04T00:03:14.718214324Z" level=info msg="CreateContainer within sandbox \"cd44b0b8fffa7e8293ef998c7be16b4f1caa97a9c03e11a2028aea472f70cebc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 00:03:14.731903 containerd[2018]: time="2025-09-04T00:03:14.731718280Z" level=info msg="Container a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:03:14.733096 containerd[2018]: time="2025-09-04T00:03:14.733011474Z" level=info msg="Container 545619100626abe97ccbe4267e5845fab4a6e6d07e68836f204510ddf85ce3f4: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:03:14.736472 containerd[2018]: time="2025-09-04T00:03:14.736408545Z" level=info msg="Container 7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:03:14.746409 containerd[2018]: time="2025-09-04T00:03:14.746335324Z" level=info msg="CreateContainer within sandbox \"129a0b39cbc045991fa5e1122a16fe6119912b3133f8020bd557540d615f1e5a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9\"" Sep 4 
00:03:14.748356 containerd[2018]: time="2025-09-04T00:03:14.748316418Z" level=info msg="StartContainer for \"a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9\"" Sep 4 00:03:14.749655 containerd[2018]: time="2025-09-04T00:03:14.749582923Z" level=info msg="connecting to shim a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9" address="unix:///run/containerd/s/d3863e43e2de5f1b8b93c491901f69b949bc9d662cefe7d5fa373069875336eb" protocol=ttrpc version=3 Sep 4 00:03:14.753018 containerd[2018]: time="2025-09-04T00:03:14.752915334Z" level=info msg="CreateContainer within sandbox \"941312204d466ab6db0be62542b66c2c8eb10cbda6a63c57f37fead996f2e7f5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"545619100626abe97ccbe4267e5845fab4a6e6d07e68836f204510ddf85ce3f4\"" Sep 4 00:03:14.755687 containerd[2018]: time="2025-09-04T00:03:14.754071634Z" level=info msg="StartContainer for \"545619100626abe97ccbe4267e5845fab4a6e6d07e68836f204510ddf85ce3f4\"" Sep 4 00:03:14.755687 containerd[2018]: time="2025-09-04T00:03:14.755440459Z" level=info msg="connecting to shim 545619100626abe97ccbe4267e5845fab4a6e6d07e68836f204510ddf85ce3f4" address="unix:///run/containerd/s/e291f1b24b30fa4ffc4d76f6da57ae064bb951b611e950891a81f05117f1a513" protocol=ttrpc version=3 Sep 4 00:03:14.755687 containerd[2018]: time="2025-09-04T00:03:14.755555624Z" level=info msg="CreateContainer within sandbox \"cd44b0b8fffa7e8293ef998c7be16b4f1caa97a9c03e11a2028aea472f70cebc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230\"" Sep 4 00:03:14.756383 containerd[2018]: time="2025-09-04T00:03:14.756350655Z" level=info msg="StartContainer for \"7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230\"" Sep 4 00:03:14.758285 containerd[2018]: time="2025-09-04T00:03:14.758248280Z" level=info msg="connecting to shim 
7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230" address="unix:///run/containerd/s/b4c8a819ad742f5ed8380c344342251ac6191953ae89e790413fe7a74073dddd" protocol=ttrpc version=3 Sep 4 00:03:14.782177 systemd[1]: Started cri-containerd-a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9.scope - libcontainer container a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9. Sep 4 00:03:14.801089 systemd[1]: Started cri-containerd-545619100626abe97ccbe4267e5845fab4a6e6d07e68836f204510ddf85ce3f4.scope - libcontainer container 545619100626abe97ccbe4267e5845fab4a6e6d07e68836f204510ddf85ce3f4. Sep 4 00:03:14.804482 systemd[1]: Started cri-containerd-7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230.scope - libcontainer container 7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230. Sep 4 00:03:14.916180 containerd[2018]: time="2025-09-04T00:03:14.916074266Z" level=info msg="StartContainer for \"a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9\" returns successfully" Sep 4 00:03:14.930903 containerd[2018]: time="2025-09-04T00:03:14.930380925Z" level=info msg="StartContainer for \"545619100626abe97ccbe4267e5845fab4a6e6d07e68836f204510ddf85ce3f4\" returns successfully" Sep 4 00:03:14.933001 containerd[2018]: time="2025-09-04T00:03:14.932939699Z" level=info msg="StartContainer for \"7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230\" returns successfully" Sep 4 00:03:14.982486 kubelet[2947]: E0904 00:03:14.982424 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 00:03:15.203495 kubelet[2947]: E0904 00:03:15.203445 2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="1.6s" Sep 4 00:03:15.301663 kubelet[2947]: E0904 00:03:15.301388 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-83&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 00:03:15.395102 kubelet[2947]: I0904 00:03:15.394670 2947 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-83" Sep 4 00:03:15.395102 kubelet[2947]: E0904 00:03:15.395055 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Sep 4 00:03:15.772326 kubelet[2947]: E0904 00:03:15.772284 2947 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 00:03:15.866544 kubelet[2947]: E0904 00:03:15.866314 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:15.868951 kubelet[2947]: E0904 00:03:15.868700 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:15.873238 kubelet[2947]: E0904 00:03:15.873207 2947 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:16.716084 kubelet[2947]: E0904 00:03:16.716043 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 00:03:16.805007 kubelet[2947]: E0904 00:03:16.804956 2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="3.2s" Sep 4 00:03:16.874658 kubelet[2947]: E0904 00:03:16.874403 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:16.874658 kubelet[2947]: E0904 00:03:16.874522 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:16.875128 kubelet[2947]: E0904 00:03:16.875105 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:16.997558 kubelet[2947]: I0904 00:03:16.997529 2947 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-83" Sep 4 00:03:16.998244 kubelet[2947]: E0904 00:03:16.998135 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Sep 4 00:03:17.399605 kubelet[2947]: E0904 00:03:17.399227 2947 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 00:03:17.581380 kubelet[2947]: E0904 00:03:17.581329 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 00:03:17.785444 kubelet[2947]: E0904 00:03:17.785395 2947 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-83&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 00:03:17.876503 kubelet[2947]: E0904 00:03:17.876467 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:17.877634 kubelet[2947]: E0904 00:03:17.877606 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Sep 4 00:03:19.879668 update_engine[1982]: I20250904 00:03:19.878937 1982 update_attempter.cc:509] Updating boot flags... 
Sep 4 00:03:20.205563 kubelet[2947]: I0904 00:03:20.205061 2947 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-83"
Sep 4 00:03:20.390431 kubelet[2947]: E0904 00:03:20.390101 2947 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-83\" not found" node="ip-172-31-20-83"
Sep 4 00:03:20.500097 kubelet[2947]: I0904 00:03:20.499840 2947 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-83"
Sep 4 00:03:20.500097 kubelet[2947]: E0904 00:03:20.499910 2947 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-83\": node \"ip-172-31-20-83\" not found"
Sep 4 00:03:20.604102 kubelet[2947]: E0904 00:03:20.603920 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:20.704470 kubelet[2947]: E0904 00:03:20.704428 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:20.805435 kubelet[2947]: E0904 00:03:20.805322 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:20.906950 kubelet[2947]: E0904 00:03:20.905715 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.007326 kubelet[2947]: E0904 00:03:21.007282 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.107718 kubelet[2947]: E0904 00:03:21.107622 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.207794 kubelet[2947]: E0904 00:03:21.207752 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.209303 kubelet[2947]: E0904 00:03:21.209259 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83"
Sep 4 00:03:21.309168 kubelet[2947]: E0904 00:03:21.309123 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.410419 kubelet[2947]: E0904 00:03:21.410089 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.510291 kubelet[2947]: E0904 00:03:21.510246 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.611271 kubelet[2947]: E0904 00:03:21.611216 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.714243 kubelet[2947]: E0904 00:03:21.713789 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.814721 kubelet[2947]: E0904 00:03:21.814677 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:21.906638 kubelet[2947]: E0904 00:03:21.906570 2947 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-83\" not found" node="ip-172-31-20-83"
Sep 4 00:03:21.915617 kubelet[2947]: E0904 00:03:21.915562 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.016647 kubelet[2947]: E0904 00:03:22.016602 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.116737 kubelet[2947]: E0904 00:03:22.116700 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.218181 kubelet[2947]: E0904 00:03:22.217730 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.319692 kubelet[2947]: E0904 00:03:22.319583 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.420133 kubelet[2947]: E0904 00:03:22.420086 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.521030 kubelet[2947]: E0904 00:03:22.520962 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.622124 kubelet[2947]: E0904 00:03:22.621821 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.637372 systemd[1]: Reload requested from client PID 3499 ('systemctl') (unit session-9.scope)...
Sep 4 00:03:22.637391 systemd[1]: Reloading...
Sep 4 00:03:22.722861 kubelet[2947]: E0904 00:03:22.722822 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.774920 zram_generator::config[3549]: No configuration found.
Sep 4 00:03:22.823923 kubelet[2947]: E0904 00:03:22.823645 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:22.885729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
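The docker.socket warning above is fixable with a one-line change in the unit (or a drop-in). A hedged sketch of the relevant stanza only; the drop-in path is hypothetical and the rest of the unit is omitted:

```ini
# /etc/systemd/system/docker.socket.d/override.conf (hypothetical drop-in)
[Socket]
# Clear the inherited value, then re-declare it under /run instead of the
# legacy /var/run path that systemd warns about in the log.
ListenStream=
ListenStream=/run/docker.sock
```

After editing, `systemctl daemon-reload` makes systemd re-read the unit, which is exactly the `Reload requested ... Reloading...` sequence visible in the log.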
Sep 4 00:03:22.924473 kubelet[2947]: E0904 00:03:22.924411 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:23.025066 kubelet[2947]: E0904 00:03:23.025015 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found"
Sep 4 00:03:23.042664 systemd[1]: Reloading finished in 404 ms.
Sep 4 00:03:23.085621 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:23.093810 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 00:03:23.094184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:23.094238 systemd[1]: kubelet.service: Consumed 1.459s CPU time, 130.5M memory peak.
Sep 4 00:03:23.099093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:23.393754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:23.407445 (kubelet)[3603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 00:03:23.504376 kubelet[3603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:03:23.505314 kubelet[3603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 00:03:23.505314 kubelet[3603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
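The deprecation warnings above point at moving flags into the kubelet config file. A minimal sketch of what that migration could look like, assuming the standard KubeletConfiguration v1beta1 schema; the endpoint and plugin-dir values here are common defaults, not read from this host:

```yaml
# Hedged sketch of /etc/kubernetes/kubelet.conf-style config replacing the
# deprecated --container-runtime-endpoint and --volume-plugin-dir flags.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed value
volumePluginDir: /var/lib/kubelet/volumeplugins                   # assumed value
```

The file is passed via `--config`; `--pod-infra-container-image` has no config-file equivalent here because, per the log, the sandbox image now comes from the CRI runtime.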
Sep 4 00:03:23.505314 kubelet[3603]: I0904 00:03:23.504987 3603 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 00:03:23.513501 kubelet[3603]: I0904 00:03:23.513472 3603 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 4 00:03:23.513616 kubelet[3603]: I0904 00:03:23.513609 3603 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 00:03:23.513966 kubelet[3603]: I0904 00:03:23.513948 3603 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 4 00:03:23.515284 kubelet[3603]: I0904 00:03:23.515268 3603 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 4 00:03:23.531452 kubelet[3603]: I0904 00:03:23.531417 3603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 00:03:23.544285 kubelet[3603]: I0904 00:03:23.544258 3603 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 00:03:23.547082 kubelet[3603]: I0904 00:03:23.547015 3603 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 00:03:23.549171 kubelet[3603]: I0904 00:03:23.548762 3603 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 00:03:23.549171 kubelet[3603]: I0904 00:03:23.548796 3603 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 00:03:23.549171 kubelet[3603]: I0904 00:03:23.548987 3603 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 00:03:23.549171 kubelet[3603]: I0904 00:03:23.548998 3603 container_manager_linux.go:303] "Creating device plugin manager"
Sep 4 00:03:23.551085 kubelet[3603]: I0904 00:03:23.551059 3603 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:03:23.553208 kubelet[3603]: I0904 00:03:23.553187 3603 kubelet.go:480] "Attempting to sync node with API server"
Sep 4 00:03:23.553208 kubelet[3603]: I0904 00:03:23.553209 3603 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 00:03:23.553373 kubelet[3603]: I0904 00:03:23.553238 3603 kubelet.go:386] "Adding apiserver pod source"
Sep 4 00:03:23.553373 kubelet[3603]: I0904 00:03:23.553261 3603 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 00:03:23.563435 kubelet[3603]: I0904 00:03:23.563328 3603 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 4 00:03:23.571897 kubelet[3603]: I0904 00:03:23.571433 3603 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 4 00:03:23.583266 kubelet[3603]: I0904 00:03:23.583235 3603 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 00:03:23.583585 kubelet[3603]: I0904 00:03:23.583495 3603 server.go:1289] "Started kubelet"
Sep 4 00:03:23.585515 kubelet[3603]: I0904 00:03:23.585456 3603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 00:03:23.588514 kubelet[3603]: I0904 00:03:23.586738 3603 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 00:03:23.595047 kubelet[3603]: I0904 00:03:23.595020 3603 server.go:317] "Adding debug handlers to kubelet server"
Sep 4 00:03:23.595316 kubelet[3603]: I0904 00:03:23.595293 3603 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 00:03:23.596649 kubelet[3603]: I0904 00:03:23.596603 3603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 00:03:23.597354 kubelet[3603]: I0904 00:03:23.595121 3603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 00:03:23.606175 kubelet[3603]: I0904 00:03:23.606149 3603 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 00:03:23.606865 kubelet[3603]: I0904 00:03:23.606807 3603 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 00:03:23.607255 kubelet[3603]: I0904 00:03:23.607237 3603 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 00:03:23.612056 kubelet[3603]: I0904 00:03:23.611932 3603 factory.go:223] Registration of the systemd container factory successfully
Sep 4 00:03:23.612362 kubelet[3603]: I0904 00:03:23.612334 3603 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 00:03:23.617464 kubelet[3603]: I0904 00:03:23.617432 3603 factory.go:223] Registration of the containerd container factory successfully
Sep 4 00:03:23.632192 kubelet[3603]: I0904 00:03:23.632048 3603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 4 00:03:23.635991 kubelet[3603]: I0904 00:03:23.635767 3603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 4 00:03:23.639091 kubelet[3603]: I0904 00:03:23.638748 3603 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 4 00:03:23.639091 kubelet[3603]: I0904 00:03:23.638814 3603 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 00:03:23.639091 kubelet[3603]: I0904 00:03:23.638836 3603 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 4 00:03:23.639091 kubelet[3603]: E0904 00:03:23.638919 3603 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 00:03:23.654001 kubelet[3603]: E0904 00:03:23.653630 3603 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 00:03:23.712735 kubelet[3603]: I0904 00:03:23.712703 3603 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 00:03:23.712735 kubelet[3603]: I0904 00:03:23.712721 3603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 00:03:23.712735 kubelet[3603]: I0904 00:03:23.712743 3603 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:03:23.713130 kubelet[3603]: I0904 00:03:23.712946 3603 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 00:03:23.713130 kubelet[3603]: I0904 00:03:23.712961 3603 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 00:03:23.713130 kubelet[3603]: I0904 00:03:23.712982 3603 policy_none.go:49] "None policy: Start"
Sep 4 00:03:23.713130 kubelet[3603]: I0904 00:03:23.712994 3603 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 00:03:23.713130 kubelet[3603]: I0904 00:03:23.713007 3603 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 00:03:23.713308 kubelet[3603]: I0904 00:03:23.713171 3603 state_mem.go:75] "Updated machine memory state"
Sep 4 00:03:23.720130 kubelet[3603]: E0904 00:03:23.720096 3603 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 4 00:03:23.720307 kubelet[3603]: I0904 00:03:23.720288 3603 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 00:03:23.720371 kubelet[3603]: I0904 00:03:23.720306 3603 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 00:03:23.721162 kubelet[3603]: I0904 00:03:23.721093 3603 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 00:03:23.723780 kubelet[3603]: E0904 00:03:23.723754 3603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 00:03:23.741967 kubelet[3603]: I0904 00:03:23.741836 3603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-83"
Sep 4 00:03:23.744951 kubelet[3603]: I0904 00:03:23.743226 3603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-83"
Sep 4 00:03:23.746198 kubelet[3603]: I0904 00:03:23.746166 3603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-83"
Sep 4 00:03:23.837597 kubelet[3603]: I0904 00:03:23.837565 3603 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-83"
Sep 4 00:03:23.851692 kubelet[3603]: I0904 00:03:23.851097 3603 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-83"
Sep 4 00:03:23.852037 kubelet[3603]: I0904 00:03:23.851983 3603 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-83"
Sep 4 00:03:23.910979 kubelet[3603]: I0904 00:03:23.910689 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec22c44167151ef6fcc07c25b1eb77eb-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ec22c44167151ef6fcc07c25b1eb77eb\") " pod="kube-system/kube-apiserver-ip-172-31-20-83"
Sep 4 00:03:23.910979 kubelet[3603]: I0904 00:03:23.910734 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83"
Sep 4 00:03:23.910979 kubelet[3603]: I0904 00:03:23.910762 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83"
Sep 4 00:03:23.910979 kubelet[3603]: I0904 00:03:23.910778 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6bf482dfc29e641c3129a03506137cca-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-83\" (UID: \"6bf482dfc29e641c3129a03506137cca\") " pod="kube-system/kube-scheduler-ip-172-31-20-83"
Sep 4 00:03:23.910979 kubelet[3603]: I0904 00:03:23.910796 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec22c44167151ef6fcc07c25b1eb77eb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ec22c44167151ef6fcc07c25b1eb77eb\") " pod="kube-system/kube-apiserver-ip-172-31-20-83"
Sep 4 00:03:23.911193 kubelet[3603]: I0904 00:03:23.910815 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83"
Sep 4 00:03:23.911193 kubelet[3603]: I0904 00:03:23.910839 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83"
Sep 4 00:03:23.911193 kubelet[3603]: I0904 00:03:23.910854 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86eb50475b112416430f028a43ba4ef0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"86eb50475b112416430f028a43ba4ef0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83"
Sep 4 00:03:23.911193 kubelet[3603]: I0904 00:03:23.910871 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec22c44167151ef6fcc07c25b1eb77eb-ca-certs\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ec22c44167151ef6fcc07c25b1eb77eb\") " pod="kube-system/kube-apiserver-ip-172-31-20-83"
Sep 4 00:03:24.561584 kubelet[3603]: I0904 00:03:24.561229 3603 apiserver.go:52] "Watching apiserver"
Sep 4 00:03:24.606967 kubelet[3603]: I0904 00:03:24.606916 3603 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 00:03:24.686918 kubelet[3603]: I0904 00:03:24.686855 3603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-83"
Sep 4 00:03:24.695134 kubelet[3603]: E0904 00:03:24.695055 3603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-83\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-83"
Sep 4 00:03:24.724218 kubelet[3603]: I0904 00:03:24.724160 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-83" podStartSLOduration=1.7241414879999999 podStartE2EDuration="1.724141488s" podCreationTimestamp="2025-09-04 00:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:03:24.712915017 +0000 UTC m=+1.295148014" watchObservedRunningTime="2025-09-04 00:03:24.724141488 +0000 UTC m=+1.306374475"
Sep 4 00:03:24.735919 kubelet[3603]: I0904 00:03:24.735752 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-83" podStartSLOduration=1.7357352750000001 podStartE2EDuration="1.735735275s" podCreationTimestamp="2025-09-04 00:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:03:24.724788489 +0000 UTC m=+1.307021497" watchObservedRunningTime="2025-09-04 00:03:24.735735275 +0000 UTC m=+1.317968273"
Sep 4 00:03:24.751521 kubelet[3603]: I0904 00:03:24.751235 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-83" podStartSLOduration=1.7512200180000002 podStartE2EDuration="1.751220018s" podCreationTimestamp="2025-09-04 00:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:03:24.73624279 +0000 UTC m=+1.318475785" watchObservedRunningTime="2025-09-04 00:03:24.751220018 +0000 UTC m=+1.333453024"
Sep 4 00:03:28.121330 kubelet[3603]: I0904 00:03:28.121277 3603 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 00:03:28.121741 containerd[2018]: time="2025-09-04T00:03:28.121710882Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
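The "Updating runtime config through cri with podcidr" entry above is the kubelet relaying the pod CIDR it received from the Node object's spec to the runtime. A hedged sketch of where that value lives, using a trimmed stand-in for `kubectl get node -o json` output (field names follow the real Node spec; the CIDR is the one from the log):

```python
import json

# Trimmed stand-in for apiserver output; only spec.podCIDR matters here.
sample_node = '{"spec": {"podCIDR": "192.168.0.0/24", "podCIDRs": ["192.168.0.0/24"]}}'

# This is the value the kubelet pushes to the CRI runtime and logs as
# newPodCIDR="192.168.0.0/24".
pod_cidr = json.loads(sample_node)["spec"]["podCIDR"]
print(pod_cidr)
```

Until the network plugin (here, Calico, installed later via the tigera-operator) drops a CNI config, containerd only notes that it is waiting, as the log shows.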
Sep 4 00:03:28.122269 kubelet[3603]: I0904 00:03:28.122228 3603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 00:03:28.792866 systemd[1]: Created slice kubepods-besteffort-pod779a4cbe_3678_458c_a8af_1ad76dc2a93a.slice - libcontainer container kubepods-besteffort-pod779a4cbe_3678_458c_a8af_1ad76dc2a93a.slice.
Sep 4 00:03:28.844357 kubelet[3603]: I0904 00:03:28.844315 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69nts\" (UniqueName: \"kubernetes.io/projected/779a4cbe-3678-458c-a8af-1ad76dc2a93a-kube-api-access-69nts\") pod \"kube-proxy-z844v\" (UID: \"779a4cbe-3678-458c-a8af-1ad76dc2a93a\") " pod="kube-system/kube-proxy-z844v"
Sep 4 00:03:28.844357 kubelet[3603]: I0904 00:03:28.844357 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/779a4cbe-3678-458c-a8af-1ad76dc2a93a-kube-proxy\") pod \"kube-proxy-z844v\" (UID: \"779a4cbe-3678-458c-a8af-1ad76dc2a93a\") " pod="kube-system/kube-proxy-z844v"
Sep 4 00:03:28.844357 kubelet[3603]: I0904 00:03:28.844378 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/779a4cbe-3678-458c-a8af-1ad76dc2a93a-xtables-lock\") pod \"kube-proxy-z844v\" (UID: \"779a4cbe-3678-458c-a8af-1ad76dc2a93a\") " pod="kube-system/kube-proxy-z844v"
Sep 4 00:03:28.844357 kubelet[3603]: I0904 00:03:28.844392 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/779a4cbe-3678-458c-a8af-1ad76dc2a93a-lib-modules\") pod \"kube-proxy-z844v\" (UID: \"779a4cbe-3678-458c-a8af-1ad76dc2a93a\") " pod="kube-system/kube-proxy-z844v"
Sep 4 00:03:28.955175 kubelet[3603]: E0904 00:03:28.955142 3603 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 4 00:03:28.955175 kubelet[3603]: E0904 00:03:28.955171 3603 projected.go:194] Error preparing data for projected volume kube-api-access-69nts for pod kube-system/kube-proxy-z844v: configmap "kube-root-ca.crt" not found
Sep 4 00:03:28.955341 kubelet[3603]: E0904 00:03:28.955232 3603 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/779a4cbe-3678-458c-a8af-1ad76dc2a93a-kube-api-access-69nts podName:779a4cbe-3678-458c-a8af-1ad76dc2a93a nodeName:}" failed. No retries permitted until 2025-09-04 00:03:29.455213127 +0000 UTC m=+6.037446114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-69nts" (UniqueName: "kubernetes.io/projected/779a4cbe-3678-458c-a8af-1ad76dc2a93a-kube-api-access-69nts") pod "kube-proxy-z844v" (UID: "779a4cbe-3678-458c-a8af-1ad76dc2a93a") : configmap "kube-root-ca.crt" not found
Sep 4 00:03:29.260446 systemd[1]: Created slice kubepods-besteffort-podfc988561_0072_46e7_8e12_ddfba82b61b1.slice - libcontainer container kubepods-besteffort-podfc988561_0072_46e7_8e12_ddfba82b61b1.slice.
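The `durationBeforeRetry 500ms` in the failed MountVolume operation above is the first step of an exponential backoff; once `kube-root-ca.crt` exists, a later retry succeeds and the pod proceeds. An illustrative sketch of that schedule; the doubling and attempt count here are assumptions, not kubelet's exact constants:

```python
def backoff_delays(initial_ms=500, factor=2, attempts=4):
    """Return the first few retry delays of a simple exponential backoff.

    initial_ms=500 matches the log's durationBeforeRetry; factor and
    attempts are assumed for illustration.
    """
    delay, out = initial_ms, []
    for _ in range(attempts):
        out.append(delay)
        delay *= factor
    return out

print(backoff_delays())  # → [500, 1000, 2000, 4000]
```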
Sep 4 00:03:29.348830 kubelet[3603]: I0904 00:03:29.348777 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54pg\" (UniqueName: \"kubernetes.io/projected/fc988561-0072-46e7-8e12-ddfba82b61b1-kube-api-access-x54pg\") pod \"tigera-operator-755d956888-9tcx7\" (UID: \"fc988561-0072-46e7-8e12-ddfba82b61b1\") " pod="tigera-operator/tigera-operator-755d956888-9tcx7"
Sep 4 00:03:29.348830 kubelet[3603]: I0904 00:03:29.348833 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc988561-0072-46e7-8e12-ddfba82b61b1-var-lib-calico\") pod \"tigera-operator-755d956888-9tcx7\" (UID: \"fc988561-0072-46e7-8e12-ddfba82b61b1\") " pod="tigera-operator/tigera-operator-755d956888-9tcx7"
Sep 4 00:03:29.564749 containerd[2018]: time="2025-09-04T00:03:29.564644691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9tcx7,Uid:fc988561-0072-46e7-8e12-ddfba82b61b1,Namespace:tigera-operator,Attempt:0,}"
Sep 4 00:03:29.599171 containerd[2018]: time="2025-09-04T00:03:29.599127425Z" level=info msg="connecting to shim 9ea78a2acd03f407d8a5cc442ef4fc0564e8c29f84181385a5d2e3ce7f22a60f" address="unix:///run/containerd/s/e5422f1ce3ddecc578eb8107c9391e2912f229b177077362e9737cf22d35876b" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:03:29.634235 systemd[1]: Started cri-containerd-9ea78a2acd03f407d8a5cc442ef4fc0564e8c29f84181385a5d2e3ce7f22a60f.scope - libcontainer container 9ea78a2acd03f407d8a5cc442ef4fc0564e8c29f84181385a5d2e3ce7f22a60f.
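The `kubepods-besteffort-podfc988561_…` slice that systemd created for this tigera-operator pod follows the kubelet's systemd cgroup-driver naming, visible throughout the log: QoS class plus the pod UID with `-` mapped to `_`. A hedged sketch of that mapping (the helper name is ours, not kubelet's):

```python
def pod_slice(uid, qos="besteffort"):
    """Derive the systemd slice name kubelet uses for a pod cgroup.

    Reconstructed from the log's slice names; qos is the pod's QoS class
    ("besteffort" here, since these pods set no resource requests).
    """
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

# UID taken from the tigera-operator pod in the log:
print(pod_slice("fc988561-0072-46e7-8e12-ddfba82b61b1"))
# → kubepods-besteffort-podfc988561_0072_46e7_8e12_ddfba82b61b1.slice
```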
Sep 4 00:03:29.693585 containerd[2018]: time="2025-09-04T00:03:29.693532002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9tcx7,Uid:fc988561-0072-46e7-8e12-ddfba82b61b1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9ea78a2acd03f407d8a5cc442ef4fc0564e8c29f84181385a5d2e3ce7f22a60f\""
Sep 4 00:03:29.698223 containerd[2018]: time="2025-09-04T00:03:29.698023389Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 4 00:03:29.704248 containerd[2018]: time="2025-09-04T00:03:29.703867144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z844v,Uid:779a4cbe-3678-458c-a8af-1ad76dc2a93a,Namespace:kube-system,Attempt:0,}"
Sep 4 00:03:29.742913 containerd[2018]: time="2025-09-04T00:03:29.741220063Z" level=info msg="connecting to shim 28ba9008e99c4c7c8d93ace4c54f7dcfbe599c05e12875f6192c154ebc82dc10" address="unix:///run/containerd/s/061b4459ff08791bcac70e302949c90b6a67dbc77df3731978a8ba7f357e6bc2" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:03:29.765085 systemd[1]: Started cri-containerd-28ba9008e99c4c7c8d93ace4c54f7dcfbe599c05e12875f6192c154ebc82dc10.scope - libcontainer container 28ba9008e99c4c7c8d93ace4c54f7dcfbe599c05e12875f6192c154ebc82dc10.
Sep 4 00:03:29.802866 containerd[2018]: time="2025-09-04T00:03:29.802814990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z844v,Uid:779a4cbe-3678-458c-a8af-1ad76dc2a93a,Namespace:kube-system,Attempt:0,} returns sandbox id \"28ba9008e99c4c7c8d93ace4c54f7dcfbe599c05e12875f6192c154ebc82dc10\""
Sep 4 00:03:29.810871 containerd[2018]: time="2025-09-04T00:03:29.810829557Z" level=info msg="CreateContainer within sandbox \"28ba9008e99c4c7c8d93ace4c54f7dcfbe599c05e12875f6192c154ebc82dc10\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 00:03:29.828913 containerd[2018]: time="2025-09-04T00:03:29.828730445Z" level=info msg="Container 41a7bd67042e73d6219704ea7a0a6058c5194433ae275e859417a75bfe48a770: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:03:29.846434 containerd[2018]: time="2025-09-04T00:03:29.846387639Z" level=info msg="CreateContainer within sandbox \"28ba9008e99c4c7c8d93ace4c54f7dcfbe599c05e12875f6192c154ebc82dc10\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41a7bd67042e73d6219704ea7a0a6058c5194433ae275e859417a75bfe48a770\""
Sep 4 00:03:29.847315 containerd[2018]: time="2025-09-04T00:03:29.847227149Z" level=info msg="StartContainer for \"41a7bd67042e73d6219704ea7a0a6058c5194433ae275e859417a75bfe48a770\""
Sep 4 00:03:29.852554 containerd[2018]: time="2025-09-04T00:03:29.852502970Z" level=info msg="connecting to shim 41a7bd67042e73d6219704ea7a0a6058c5194433ae275e859417a75bfe48a770" address="unix:///run/containerd/s/061b4459ff08791bcac70e302949c90b6a67dbc77df3731978a8ba7f357e6bc2" protocol=ttrpc version=3
Sep 4 00:03:29.881113 systemd[1]: Started cri-containerd-41a7bd67042e73d6219704ea7a0a6058c5194433ae275e859417a75bfe48a770.scope - libcontainer container 41a7bd67042e73d6219704ea7a0a6058c5194433ae275e859417a75bfe48a770.
Sep 4 00:03:29.929026 containerd[2018]: time="2025-09-04T00:03:29.928987286Z" level=info msg="StartContainer for \"41a7bd67042e73d6219704ea7a0a6058c5194433ae275e859417a75bfe48a770\" returns successfully"
Sep 4 00:03:31.245439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201316025.mount: Deactivated successfully.
Sep 4 00:03:31.823424 kubelet[3603]: I0904 00:03:31.822931 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z844v" podStartSLOduration=3.8229070800000002 podStartE2EDuration="3.82290708s" podCreationTimestamp="2025-09-04 00:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:03:30.717719095 +0000 UTC m=+7.299952091" watchObservedRunningTime="2025-09-04 00:03:31.82290708 +0000 UTC m=+8.405140070"
Sep 4 00:03:32.303684 containerd[2018]: time="2025-09-04T00:03:32.303607463Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:32.330210 containerd[2018]: time="2025-09-04T00:03:32.330114880Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 4 00:03:32.332558 containerd[2018]: time="2025-09-04T00:03:32.332484951Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:32.337899 containerd[2018]: time="2025-09-04T00:03:32.337265020Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:32.338848 containerd[2018]: time="2025-09-04T00:03:32.338818878Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.640264691s"
Sep 4 00:03:32.339000 containerd[2018]: time="2025-09-04T00:03:32.338977583Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 4 00:03:32.351906 containerd[2018]: time="2025-09-04T00:03:32.351636743Z" level=info msg="CreateContainer within sandbox \"9ea78a2acd03f407d8a5cc442ef4fc0564e8c29f84181385a5d2e3ce7f22a60f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 00:03:32.386030 containerd[2018]: time="2025-09-04T00:03:32.384476921Z" level=info msg="Container b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:03:32.395743 containerd[2018]: time="2025-09-04T00:03:32.395658908Z" level=info msg="CreateContainer within sandbox \"9ea78a2acd03f407d8a5cc442ef4fc0564e8c29f84181385a5d2e3ce7f22a60f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\""
Sep 4 00:03:32.396906 containerd[2018]: time="2025-09-04T00:03:32.396528209Z" level=info msg="StartContainer for \"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\""
Sep 4 00:03:32.397678 containerd[2018]: time="2025-09-04T00:03:32.397649777Z" level=info msg="connecting to shim b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77" address="unix:///run/containerd/s/e5422f1ce3ddecc578eb8107c9391e2912f229b177077362e9737cf22d35876b" protocol=ttrpc version=3
Sep 4 00:03:32.429128 systemd[1]: Started cri-containerd-b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77.scope - libcontainer container b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77.
Sep 4 00:03:32.468155 containerd[2018]: time="2025-09-04T00:03:32.468099607Z" level=info msg="StartContainer for \"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\" returns successfully"
Sep 4 00:03:33.010044 kubelet[3603]: I0904 00:03:33.009958 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-9tcx7" podStartSLOduration=1.364599793 podStartE2EDuration="4.009933642s" podCreationTimestamp="2025-09-04 00:03:29 +0000 UTC" firstStartedPulling="2025-09-04 00:03:29.697191806 +0000 UTC m=+6.279424790" lastFinishedPulling="2025-09-04 00:03:32.342525652 +0000 UTC m=+8.924758639" observedRunningTime="2025-09-04 00:03:32.725748769 +0000 UTC m=+9.307981778" watchObservedRunningTime="2025-09-04 00:03:33.009933642 +0000 UTC m=+9.592166636"
Sep 4 00:03:39.751956 sudo[2362]: pam_unix(sudo:session): session closed for user root
Sep 4 00:03:39.775016 sshd[2361]: Connection closed by 139.178.68.195 port 53578
Sep 4 00:03:39.776415 sshd-session[2359]: pam_unix(sshd:session): session closed for user core
Sep 4 00:03:39.783071 systemd-logind[1981]: Session 9 logged out. Waiting for processes to exit.
Sep 4 00:03:39.784556 systemd[1]: sshd@8-172.31.20.83:22-139.178.68.195:53578.service: Deactivated successfully.
Sep 4 00:03:39.790290 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 00:03:39.790701 systemd[1]: session-9.scope: Consumed 5.783s CPU time, 154.3M memory peak.
Sep 4 00:03:39.796589 systemd-logind[1981]: Removed session 9.
Sep 4 00:03:44.876177 systemd[1]: Created slice kubepods-besteffort-pod3740ff5c_b2f0_402c_9fd4_88a1470b928d.slice - libcontainer container kubepods-besteffort-pod3740ff5c_b2f0_402c_9fd4_88a1470b928d.slice.
Sep 4 00:03:45.003484 kubelet[3603]: I0904 00:03:45.003426 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3740ff5c-b2f0-402c-9fd4-88a1470b928d-tigera-ca-bundle\") pod \"calico-typha-55495b5988-957zc\" (UID: \"3740ff5c-b2f0-402c-9fd4-88a1470b928d\") " pod="calico-system/calico-typha-55495b5988-957zc" Sep 4 00:03:45.005588 kubelet[3603]: I0904 00:03:45.003507 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3740ff5c-b2f0-402c-9fd4-88a1470b928d-typha-certs\") pod \"calico-typha-55495b5988-957zc\" (UID: \"3740ff5c-b2f0-402c-9fd4-88a1470b928d\") " pod="calico-system/calico-typha-55495b5988-957zc" Sep 4 00:03:45.005588 kubelet[3603]: I0904 00:03:45.003535 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qd6c\" (UniqueName: \"kubernetes.io/projected/3740ff5c-b2f0-402c-9fd4-88a1470b928d-kube-api-access-4qd6c\") pod \"calico-typha-55495b5988-957zc\" (UID: \"3740ff5c-b2f0-402c-9fd4-88a1470b928d\") " pod="calico-system/calico-typha-55495b5988-957zc" Sep 4 00:03:45.183989 containerd[2018]: time="2025-09-04T00:03:45.183585379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55495b5988-957zc,Uid:3740ff5c-b2f0-402c-9fd4-88a1470b928d,Namespace:calico-system,Attempt:0,}" Sep 4 00:03:45.233956 containerd[2018]: time="2025-09-04T00:03:45.233071700Z" level=info msg="connecting to shim 0ce9783589dea5a33df8d2fffcfbb9964e1c4d1bdd2fce57721f399482770404" address="unix:///run/containerd/s/501c603c8b51ee5561bda5e0483b07a6bc5fadd5c0d9cd9f2eaf453109337109" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:03:45.280169 systemd[1]: Created slice kubepods-besteffort-pod073b248a_243a_4a93_b559_2c36887e2e89.slice - libcontainer container 
kubepods-besteffort-pod073b248a_243a_4a93_b559_2c36887e2e89.slice. Sep 4 00:03:45.310153 kubelet[3603]: I0904 00:03:45.310098 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-flexvol-driver-host\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310327 kubelet[3603]: I0904 00:03:45.310168 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-lib-modules\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310327 kubelet[3603]: I0904 00:03:45.310192 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/073b248a-243a-4a93-b559-2c36887e2e89-node-certs\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310327 kubelet[3603]: I0904 00:03:45.310215 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvnd4\" (UniqueName: \"kubernetes.io/projected/073b248a-243a-4a93-b559-2c36887e2e89-kube-api-access-mvnd4\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310327 kubelet[3603]: I0904 00:03:45.310238 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-cni-net-dir\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" 
Sep 4 00:03:45.310327 kubelet[3603]: I0904 00:03:45.310268 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-cni-bin-dir\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310534 kubelet[3603]: I0904 00:03:45.310292 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-policysync\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310534 kubelet[3603]: I0904 00:03:45.310314 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-var-lib-calico\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310534 kubelet[3603]: I0904 00:03:45.310348 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-cni-log-dir\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310534 kubelet[3603]: I0904 00:03:45.310376 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/073b248a-243a-4a93-b559-2c36887e2e89-tigera-ca-bundle\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310534 kubelet[3603]: I0904 00:03:45.310398 3603 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-var-run-calico\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310723 kubelet[3603]: I0904 00:03:45.310421 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073b248a-243a-4a93-b559-2c36887e2e89-xtables-lock\") pod \"calico-node-l95pt\" (UID: \"073b248a-243a-4a93-b559-2c36887e2e89\") " pod="calico-system/calico-node-l95pt" Sep 4 00:03:45.310770 systemd[1]: Started cri-containerd-0ce9783589dea5a33df8d2fffcfbb9964e1c4d1bdd2fce57721f399482770404.scope - libcontainer container 0ce9783589dea5a33df8d2fffcfbb9964e1c4d1bdd2fce57721f399482770404. Sep 4 00:03:45.416691 kubelet[3603]: E0904 00:03:45.416646 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.416825 kubelet[3603]: W0904 00:03:45.416774 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.421978 kubelet[3603]: E0904 00:03:45.421934 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.422341 kubelet[3603]: E0904 00:03:45.422315 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.422341 kubelet[3603]: W0904 00:03:45.422340 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.422481 kubelet[3603]: E0904 00:03:45.422365 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.422678 kubelet[3603]: E0904 00:03:45.422659 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.422737 kubelet[3603]: W0904 00:03:45.422679 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.422737 kubelet[3603]: E0904 00:03:45.422693 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.425211 kubelet[3603]: E0904 00:03:45.425180 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.425211 kubelet[3603]: W0904 00:03:45.425206 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.425372 kubelet[3603]: E0904 00:03:45.425227 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.428995 kubelet[3603]: E0904 00:03:45.428962 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.428995 kubelet[3603]: W0904 00:03:45.428984 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.429268 kubelet[3603]: E0904 00:03:45.429004 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.430287 kubelet[3603]: E0904 00:03:45.430245 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.430287 kubelet[3603]: W0904 00:03:45.430285 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.430421 kubelet[3603]: E0904 00:03:45.430306 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.431638 kubelet[3603]: E0904 00:03:45.431612 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.431744 kubelet[3603]: W0904 00:03:45.431728 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.431829 kubelet[3603]: E0904 00:03:45.431817 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.433313 kubelet[3603]: E0904 00:03:45.433213 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.433313 kubelet[3603]: W0904 00:03:45.433232 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.433313 kubelet[3603]: E0904 00:03:45.433251 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.440238 kubelet[3603]: E0904 00:03:45.440150 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.440520 kubelet[3603]: W0904 00:03:45.440364 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.440520 kubelet[3603]: E0904 00:03:45.440397 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.440768 kubelet[3603]: E0904 00:03:45.440757 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.440835 kubelet[3603]: W0904 00:03:45.440826 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.442200 kubelet[3603]: E0904 00:03:45.440912 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.442894 kubelet[3603]: E0904 00:03:45.442464 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.442894 kubelet[3603]: W0904 00:03:45.442480 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.442894 kubelet[3603]: E0904 00:03:45.442497 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.443999 kubelet[3603]: E0904 00:03:45.443945 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.443999 kubelet[3603]: W0904 00:03:45.443963 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.443999 kubelet[3603]: E0904 00:03:45.443980 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.445295 kubelet[3603]: E0904 00:03:45.445272 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.446823 kubelet[3603]: W0904 00:03:45.446589 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.446823 kubelet[3603]: E0904 00:03:45.446618 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.448606 kubelet[3603]: E0904 00:03:45.448550 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.448606 kubelet[3603]: W0904 00:03:45.448568 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.448606 kubelet[3603]: E0904 00:03:45.448587 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.449915 kubelet[3603]: E0904 00:03:45.449063 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.450075 kubelet[3603]: W0904 00:03:45.450013 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.450075 kubelet[3603]: E0904 00:03:45.450039 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.453183 kubelet[3603]: E0904 00:03:45.453109 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.453183 kubelet[3603]: W0904 00:03:45.453127 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.453183 kubelet[3603]: E0904 00:03:45.453145 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.593229 containerd[2018]: time="2025-09-04T00:03:45.593097642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l95pt,Uid:073b248a-243a-4a93-b559-2c36887e2e89,Namespace:calico-system,Attempt:0,}" Sep 4 00:03:45.594824 kubelet[3603]: E0904 00:03:45.594730 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlj2d" podUID="4c03246a-81f1-4d02-b2e6-f80b3ba3c00c" Sep 4 00:03:45.641943 containerd[2018]: time="2025-09-04T00:03:45.640594837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55495b5988-957zc,Uid:3740ff5c-b2f0-402c-9fd4-88a1470b928d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ce9783589dea5a33df8d2fffcfbb9964e1c4d1bdd2fce57721f399482770404\"" Sep 4 00:03:45.649628 containerd[2018]: time="2025-09-04T00:03:45.649573656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 00:03:45.655053 containerd[2018]: time="2025-09-04T00:03:45.654362539Z" level=info msg="connecting to shim 66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb" 
address="unix:///run/containerd/s/7070ddb9a8f602df415a2773bd12ec3eef273f615592d6c429668cd42c8dee61" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:03:45.671527 kubelet[3603]: E0904 00:03:45.671290 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.671527 kubelet[3603]: W0904 00:03:45.671319 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.671527 kubelet[3603]: E0904 00:03:45.671342 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.673386 kubelet[3603]: E0904 00:03:45.672798 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.673386 kubelet[3603]: W0904 00:03:45.672822 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.673386 kubelet[3603]: E0904 00:03:45.672846 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.673386 kubelet[3603]: E0904 00:03:45.673120 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.673386 kubelet[3603]: W0904 00:03:45.673132 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.673386 kubelet[3603]: E0904 00:03:45.673148 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.673868 kubelet[3603]: E0904 00:03:45.673762 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.673868 kubelet[3603]: W0904 00:03:45.673778 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.673868 kubelet[3603]: E0904 00:03:45.673796 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.674638 kubelet[3603]: E0904 00:03:45.674510 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.674638 kubelet[3603]: W0904 00:03:45.674525 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.674638 kubelet[3603]: E0904 00:03:45.674539 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.675184 kubelet[3603]: E0904 00:03:45.675099 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.675184 kubelet[3603]: W0904 00:03:45.675114 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.675184 kubelet[3603]: E0904 00:03:45.675128 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.675577 kubelet[3603]: E0904 00:03:45.675550 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.675747 kubelet[3603]: W0904 00:03:45.675664 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.675747 kubelet[3603]: E0904 00:03:45.675681 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.676748 kubelet[3603]: E0904 00:03:45.676735 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.676906 kubelet[3603]: W0904 00:03:45.676869 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.677051 kubelet[3603]: E0904 00:03:45.676980 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.677359 kubelet[3603]: E0904 00:03:45.677328 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.677514 kubelet[3603]: W0904 00:03:45.677432 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.677514 kubelet[3603]: E0904 00:03:45.677455 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.678012 kubelet[3603]: E0904 00:03:45.677926 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.678012 kubelet[3603]: W0904 00:03:45.677942 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.678012 kubelet[3603]: E0904 00:03:45.677955 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.678385 kubelet[3603]: E0904 00:03:45.678352 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.678573 kubelet[3603]: W0904 00:03:45.678471 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.678573 kubelet[3603]: E0904 00:03:45.678503 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.678973 kubelet[3603]: E0904 00:03:45.678923 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.678973 kubelet[3603]: W0904 00:03:45.678938 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.679190 kubelet[3603]: E0904 00:03:45.678950 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:03:45.679543 kubelet[3603]: E0904 00:03:45.679470 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.679543 kubelet[3603]: W0904 00:03:45.679482 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.679543 kubelet[3603]: E0904 00:03:45.679494 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:03:45.679974 kubelet[3603]: E0904 00:03:45.679916 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:03:45.679974 kubelet[3603]: W0904 00:03:45.679929 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:03:45.679974 kubelet[3603]: E0904 00:03:45.679942 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 00:03:45.680538 kubelet[3603]: E0904 00:03:45.680455 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:03:45.680538 kubelet[3603]: W0904 00:03:45.680468 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 00:03:45.680538 kubelet[3603]: E0904 00:03:45.680482 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 00:03:45.712479 systemd[1]: Started cri-containerd-66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb.scope - libcontainer container 66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb.
Sep 4 00:03:45.717827 kubelet[3603]: E0904 00:03:45.717790 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:03:45.718055 kubelet[3603]: W0904 00:03:45.717836 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 00:03:45.718055 kubelet[3603]: E0904 00:03:45.717861 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 00:03:45.718055 kubelet[3603]: I0904 00:03:45.717923 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c03246a-81f1-4d02-b2e6-f80b3ba3c00c-kubelet-dir\") pod \"csi-node-driver-vlj2d\" (UID: \"4c03246a-81f1-4d02-b2e6-f80b3ba3c00c\") " pod="calico-system/csi-node-driver-vlj2d"
Sep 4 00:03:45.719324 kubelet[3603]: I0904 00:03:45.719127 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c03246a-81f1-4d02-b2e6-f80b3ba3c00c-registration-dir\") pod \"csi-node-driver-vlj2d\" (UID: \"4c03246a-81f1-4d02-b2e6-f80b3ba3c00c\") " pod="calico-system/csi-node-driver-vlj2d"
Sep 4 00:03:45.721993 kubelet[3603]: I0904 00:03:45.721690 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c03246a-81f1-4d02-b2e6-f80b3ba3c00c-varrun\") pod \"csi-node-driver-vlj2d\" (UID: \"4c03246a-81f1-4d02-b2e6-f80b3ba3c00c\") " pod="calico-system/csi-node-driver-vlj2d"
Sep 4 00:03:45.725127 kubelet[3603]: I0904 00:03:45.725101 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2xwt\" (UniqueName: \"kubernetes.io/projected/4c03246a-81f1-4d02-b2e6-f80b3ba3c00c-kube-api-access-l2xwt\") pod \"csi-node-driver-vlj2d\" (UID: \"4c03246a-81f1-4d02-b2e6-f80b3ba3c00c\") " pod="calico-system/csi-node-driver-vlj2d"
Sep 4 00:03:45.727060 kubelet[3603]: I0904 00:03:45.726936 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c03246a-81f1-4d02-b2e6-f80b3ba3c00c-socket-dir\") pod \"csi-node-driver-vlj2d\" (UID: \"4c03246a-81f1-4d02-b2e6-f80b3ba3c00c\") " pod="calico-system/csi-node-driver-vlj2d"
Sep 4 00:03:45.800249 containerd[2018]: time="2025-09-04T00:03:45.800184815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l95pt,Uid:073b248a-243a-4a93-b559-2c36887e2e89,Namespace:calico-system,Attempt:0,} returns sandbox id \"66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb\""
Sep 4 00:03:45.828477 kubelet[3603]: E0904 00:03:45.828376 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:03:45.828477 kubelet[3603]: W0904 00:03:45.828413 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 00:03:45.828477 kubelet[3603]: E0904 00:03:45.828432 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 00:03:46.957889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9876175.mount: Deactivated successfully.
Sep 4 00:03:47.639959 kubelet[3603]: E0904 00:03:47.639921 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlj2d" podUID="4c03246a-81f1-4d02-b2e6-f80b3ba3c00c"
Sep 4 00:03:48.342890 containerd[2018]: time="2025-09-04T00:03:48.342815713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:48.344548 containerd[2018]: time="2025-09-04T00:03:48.344368492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 4 00:03:48.346589 containerd[2018]: time="2025-09-04T00:03:48.346536275Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:48.353378 containerd[2018]: time="2025-09-04T00:03:48.353324031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:48.354498 containerd[2018]: time="2025-09-04T00:03:48.354452931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.7048333s"
Sep 4 00:03:48.354659 containerd[2018]: time="2025-09-04T00:03:48.354596896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 4 00:03:48.356712 containerd[2018]: time="2025-09-04T00:03:48.356679238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 4 00:03:48.389385 containerd[2018]: time="2025-09-04T00:03:48.389330362Z" level=info msg="CreateContainer within sandbox \"0ce9783589dea5a33df8d2fffcfbb9964e1c4d1bdd2fce57721f399482770404\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 00:03:48.426927 containerd[2018]: time="2025-09-04T00:03:48.424681266Z" level=info msg="Container d2159b014d372bbbaf53ac9c1d149daeb267167e28d940fe70dc414650cef606: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:03:48.436840 containerd[2018]: time="2025-09-04T00:03:48.436791062Z" level=info msg="CreateContainer within sandbox \"0ce9783589dea5a33df8d2fffcfbb9964e1c4d1bdd2fce57721f399482770404\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d2159b014d372bbbaf53ac9c1d149daeb267167e28d940fe70dc414650cef606\""
Sep 4 00:03:48.437396 containerd[2018]: time="2025-09-04T00:03:48.437373218Z" level=info msg="StartContainer for \"d2159b014d372bbbaf53ac9c1d149daeb267167e28d940fe70dc414650cef606\""
Sep 4 00:03:48.439168 containerd[2018]: time="2025-09-04T00:03:48.439111962Z" level=info msg="connecting to shim d2159b014d372bbbaf53ac9c1d149daeb267167e28d940fe70dc414650cef606" address="unix:///run/containerd/s/501c603c8b51ee5561bda5e0483b07a6bc5fadd5c0d9cd9f2eaf453109337109" protocol=ttrpc version=3
Sep 4 00:03:48.502128 systemd[1]: Started cri-containerd-d2159b014d372bbbaf53ac9c1d149daeb267167e28d940fe70dc414650cef606.scope - libcontainer container d2159b014d372bbbaf53ac9c1d149daeb267167e28d940fe70dc414650cef606.
Sep 4 00:03:48.563352 containerd[2018]: time="2025-09-04T00:03:48.563279080Z" level=info msg="StartContainer for \"d2159b014d372bbbaf53ac9c1d149daeb267167e28d940fe70dc414650cef606\" returns successfully"
Sep 4 00:03:48.802886 kubelet[3603]: E0904 00:03:48.802765 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:03:48.802886 kubelet[3603]: W0904 00:03:48.802809 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 00:03:48.802886 kubelet[3603]: E0904 00:03:48.802844 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 00:03:49.642447 kubelet[3603]: E0904 00:03:49.642407 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlj2d" podUID="4c03246a-81f1-4d02-b2e6-f80b3ba3c00c"
Sep 4 00:03:49.646159 containerd[2018]: time="2025-09-04T00:03:49.646107976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:49.647189 containerd[2018]: time="2025-09-04T00:03:49.647149721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 4 00:03:49.648546 containerd[2018]: time="2025-09-04T00:03:49.648519871Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:49.651764 containerd[2018]: time="2025-09-04T00:03:49.651182056Z" level=info msg="ImageCreate event
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:49.651764 containerd[2018]: time="2025-09-04T00:03:49.651652606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.294943166s"
Sep 4 00:03:49.651764 containerd[2018]: time="2025-09-04T00:03:49.651679523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 4 00:03:49.656837 containerd[2018]: time="2025-09-04T00:03:49.656793888Z" level=info msg="CreateContainer within sandbox \"66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 4 00:03:49.669369 containerd[2018]: time="2025-09-04T00:03:49.668105818Z" level=info msg="Container 03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:03:49.682339 containerd[2018]: time="2025-09-04T00:03:49.682285978Z" level=info msg="CreateContainer within sandbox \"66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565\""
Sep 4 00:03:49.683055 containerd[2018]: time="2025-09-04T00:03:49.682977522Z" level=info msg="StartContainer for \"03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565\""
Sep 4 00:03:49.685537 containerd[2018]: time="2025-09-04T00:03:49.685495892Z" level=info msg="connecting to shim 03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565" address="unix:///run/containerd/s/7070ddb9a8f602df415a2773bd12ec3eef273f615592d6c429668cd42c8dee61" protocol=ttrpc version=3
Sep 4 00:03:49.720108 systemd[1]: Started cri-containerd-03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565.scope - libcontainer container 03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565.
Sep 4 00:03:49.769207 kubelet[3603]: I0904 00:03:49.769175 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 00:03:49.773536 containerd[2018]: time="2025-09-04T00:03:49.773486387Z" level=info msg="StartContainer for \"03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565\" returns successfully"
Sep 4 00:03:49.785564 systemd[1]: cri-containerd-03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565.scope: Deactivated successfully.
Sep 4 00:03:49.850283 containerd[2018]: time="2025-09-04T00:03:49.848739544Z" level=info msg="received exit event container_id:\"03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565\" id:\"03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565\" pid:4299 exited_at:{seconds:1756944229 nanos:788407211}"
Sep 4 00:03:49.871849 containerd[2018]: time="2025-09-04T00:03:49.871426833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565\" id:\"03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565\" pid:4299 exited_at:{seconds:1756944229 nanos:788407211}"
Sep 4 00:03:49.889073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03115b895bf1c2ce09c41b1a6ff2953c3dfe5f37f2d3bff221f4bfddaddb2565-rootfs.mount: Deactivated successfully.
Sep 4 00:03:50.774363 containerd[2018]: time="2025-09-04T00:03:50.774323631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 4 00:03:50.794921 kubelet[3603]: I0904 00:03:50.794828 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55495b5988-957zc" podStartSLOduration=4.087334109 podStartE2EDuration="6.794812724s" podCreationTimestamp="2025-09-04 00:03:44 +0000 UTC" firstStartedPulling="2025-09-04 00:03:45.648943317 +0000 UTC m=+22.231176300" lastFinishedPulling="2025-09-04 00:03:48.356421927 +0000 UTC m=+24.938654915" observedRunningTime="2025-09-04 00:03:48.787826417 +0000 UTC m=+25.370059434" watchObservedRunningTime="2025-09-04 00:03:50.794812724 +0000 UTC m=+27.377045726"
Sep 4 00:03:51.640450 kubelet[3603]: E0904 00:03:51.640282 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlj2d" podUID="4c03246a-81f1-4d02-b2e6-f80b3ba3c00c"
Sep 4 00:03:53.641964 kubelet[3603]: E0904 00:03:53.641154 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlj2d" podUID="4c03246a-81f1-4d02-b2e6-f80b3ba3c00c"
Sep 4 00:03:54.695163 containerd[2018]: time="2025-09-04T00:03:54.695115497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:54.696293 containerd[2018]: time="2025-09-04T00:03:54.696147942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 4 00:03:54.697327 containerd[2018]: time="2025-09-04T00:03:54.697284648Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:54.699944 containerd[2018]: time="2025-09-04T00:03:54.699911559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:54.700850 containerd[2018]: time="2025-09-04T00:03:54.700678854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.926305767s"
Sep 4 00:03:54.700850 containerd[2018]: time="2025-09-04T00:03:54.700716656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 4 00:03:54.707892 containerd[2018]: time="2025-09-04T00:03:54.706698341Z" level=info msg="CreateContainer within sandbox \"66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 00:03:54.720783 containerd[2018]: time="2025-09-04T00:03:54.720744440Z" level=info msg="Container 8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:03:54.740316 containerd[2018]: time="2025-09-04T00:03:54.740260224Z" level=info msg="CreateContainer within sandbox \"66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec\""
Sep 4 00:03:54.742632 containerd[2018]: time="2025-09-04T00:03:54.742571127Z" level=info msg="StartContainer for \"8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec\""
Sep 4 00:03:54.747992 containerd[2018]: time="2025-09-04T00:03:54.747948432Z" level=info msg="connecting to shim 8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec" address="unix:///run/containerd/s/7070ddb9a8f602df415a2773bd12ec3eef273f615592d6c429668cd42c8dee61" protocol=ttrpc version=3
Sep 4 00:03:54.779284 systemd[1]: Started cri-containerd-8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec.scope - libcontainer container 8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec.
Sep 4 00:03:54.869350 containerd[2018]: time="2025-09-04T00:03:54.869306208Z" level=info msg="StartContainer for \"8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec\" returns successfully"
Sep 4 00:03:55.639861 kubelet[3603]: E0904 00:03:55.639811 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlj2d" podUID="4c03246a-81f1-4d02-b2e6-f80b3ba3c00c"
Sep 4 00:03:55.880152 systemd[1]: cri-containerd-8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec.scope: Deactivated successfully.
Sep 4 00:03:55.880415 systemd[1]: cri-containerd-8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec.scope: Consumed 561ms CPU time, 165.7M memory peak, 6.8M read from disk, 171.3M written to disk.
Sep 4 00:03:55.958622 containerd[2018]: time="2025-09-04T00:03:55.957947557Z" level=info msg="received exit event container_id:\"8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec\" id:\"8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec\" pid:4356 exited_at:{seconds:1756944235 nanos:957504562}" Sep 4 00:03:55.958622 containerd[2018]: time="2025-09-04T00:03:55.958247203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec\" id:\"8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec\" pid:4356 exited_at:{seconds:1756944235 nanos:957504562}" Sep 4 00:03:55.962918 kubelet[3603]: I0904 00:03:55.962893 3603 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 00:03:56.004337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d4e05ec6916297b784ac55857e6ebdee5e79d8acd75cf96c563071b9015feec-rootfs.mount: Deactivated successfully. Sep 4 00:03:56.110292 systemd[1]: Created slice kubepods-besteffort-pod4ad89bb2_eb31_48be_ab76_8e8be76037a2.slice - libcontainer container kubepods-besteffort-pod4ad89bb2_eb31_48be_ab76_8e8be76037a2.slice. Sep 4 00:03:56.149454 systemd[1]: Created slice kubepods-burstable-pod8cd13f3f_b7b5_422b_a7be_6e9466ed1027.slice - libcontainer container kubepods-burstable-pod8cd13f3f_b7b5_422b_a7be_6e9466ed1027.slice. Sep 4 00:03:56.174676 systemd[1]: Created slice kubepods-besteffort-podd11e3c26_312a_4514_8b46_aceefc182a80.slice - libcontainer container kubepods-besteffort-podd11e3c26_312a_4514_8b46_aceefc182a80.slice. Sep 4 00:03:56.189360 systemd[1]: Created slice kubepods-besteffort-pod4bb9afcc_eef2_4e3e_b574_8eef798ccb88.slice - libcontainer container kubepods-besteffort-pod4bb9afcc_eef2_4e3e_b574_8eef798ccb88.slice. 
Sep 4 00:03:56.200684 systemd[1]: Created slice kubepods-besteffort-pod83db72b0_cd35_4665_ad4e_cfc7b78c4403.slice - libcontainer container kubepods-besteffort-pod83db72b0_cd35_4665_ad4e_cfc7b78c4403.slice. Sep 4 00:03:56.214551 systemd[1]: Created slice kubepods-besteffort-podb9ec538b_9a79_4c37_b676_3bae16c7baee.slice - libcontainer container kubepods-besteffort-podb9ec538b_9a79_4c37_b676_3bae16c7baee.slice. Sep 4 00:03:56.225925 systemd[1]: Created slice kubepods-burstable-pod542d776f_a26a_48e7_95c9_068cfc34a1e2.slice - libcontainer container kubepods-burstable-pod542d776f_a26a_48e7_95c9_068cfc34a1e2.slice. Sep 4 00:03:56.232873 kubelet[3603]: I0904 00:03:56.232834 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11e3c26-312a-4514-8b46-aceefc182a80-tigera-ca-bundle\") pod \"calico-kube-controllers-7fd96bd78c-2tmlb\" (UID: \"d11e3c26-312a-4514-8b46-aceefc182a80\") " pod="calico-system/calico-kube-controllers-7fd96bd78c-2tmlb" Sep 4 00:03:56.233911 kubelet[3603]: I0904 00:03:56.232938 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cd13f3f-b7b5-422b-a7be-6e9466ed1027-config-volume\") pod \"coredns-674b8bbfcf-8jwcz\" (UID: \"8cd13f3f-b7b5-422b-a7be-6e9466ed1027\") " pod="kube-system/coredns-674b8bbfcf-8jwcz" Sep 4 00:03:56.233911 kubelet[3603]: I0904 00:03:56.232999 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj7ln\" (UniqueName: \"kubernetes.io/projected/b9ec538b-9a79-4c37-b676-3bae16c7baee-kube-api-access-rj7ln\") pod \"calico-apiserver-69df694c46-rpcnm\" (UID: \"b9ec538b-9a79-4c37-b676-3bae16c7baee\") " pod="calico-apiserver/calico-apiserver-69df694c46-rpcnm" Sep 4 00:03:56.233911 kubelet[3603]: I0904 00:03:56.233024 3603 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83db72b0-cd35-4665-ad4e-cfc7b78c4403-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-6jcfn\" (UID: \"83db72b0-cd35-4665-ad4e-cfc7b78c4403\") " pod="calico-system/goldmane-54d579b49d-6jcfn" Sep 4 00:03:56.233911 kubelet[3603]: I0904 00:03:56.233090 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/83db72b0-cd35-4665-ad4e-cfc7b78c4403-goldmane-key-pair\") pod \"goldmane-54d579b49d-6jcfn\" (UID: \"83db72b0-cd35-4665-ad4e-cfc7b78c4403\") " pod="calico-system/goldmane-54d579b49d-6jcfn" Sep 4 00:03:56.233911 kubelet[3603]: I0904 00:03:56.233147 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5c9c\" (UniqueName: \"kubernetes.io/projected/d11e3c26-312a-4514-8b46-aceefc182a80-kube-api-access-l5c9c\") pod \"calico-kube-controllers-7fd96bd78c-2tmlb\" (UID: \"d11e3c26-312a-4514-8b46-aceefc182a80\") " pod="calico-system/calico-kube-controllers-7fd96bd78c-2tmlb" Sep 4 00:03:56.234165 kubelet[3603]: I0904 00:03:56.233175 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ad89bb2-eb31-48be-ab76-8e8be76037a2-whisker-ca-bundle\") pod \"whisker-6fc47847cc-bzsrx\" (UID: \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\") " pod="calico-system/whisker-6fc47847cc-bzsrx" Sep 4 00:03:56.234165 kubelet[3603]: I0904 00:03:56.233248 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc2t9\" (UniqueName: \"kubernetes.io/projected/4ad89bb2-eb31-48be-ab76-8e8be76037a2-kube-api-access-hc2t9\") pod \"whisker-6fc47847cc-bzsrx\" (UID: \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\") " pod="calico-system/whisker-6fc47847cc-bzsrx" Sep 4 
00:03:56.234165 kubelet[3603]: I0904 00:03:56.233320 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83db72b0-cd35-4665-ad4e-cfc7b78c4403-config\") pod \"goldmane-54d579b49d-6jcfn\" (UID: \"83db72b0-cd35-4665-ad4e-cfc7b78c4403\") " pod="calico-system/goldmane-54d579b49d-6jcfn" Sep 4 00:03:56.234165 kubelet[3603]: I0904 00:03:56.233349 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/542d776f-a26a-48e7-95c9-068cfc34a1e2-config-volume\") pod \"coredns-674b8bbfcf-rrcdc\" (UID: \"542d776f-a26a-48e7-95c9-068cfc34a1e2\") " pod="kube-system/coredns-674b8bbfcf-rrcdc" Sep 4 00:03:56.234165 kubelet[3603]: I0904 00:03:56.233397 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b9ec538b-9a79-4c37-b676-3bae16c7baee-calico-apiserver-certs\") pod \"calico-apiserver-69df694c46-rpcnm\" (UID: \"b9ec538b-9a79-4c37-b676-3bae16c7baee\") " pod="calico-apiserver/calico-apiserver-69df694c46-rpcnm" Sep 4 00:03:56.234292 kubelet[3603]: I0904 00:03:56.233424 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ad89bb2-eb31-48be-ab76-8e8be76037a2-whisker-backend-key-pair\") pod \"whisker-6fc47847cc-bzsrx\" (UID: \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\") " pod="calico-system/whisker-6fc47847cc-bzsrx" Sep 4 00:03:56.234292 kubelet[3603]: I0904 00:03:56.233480 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rccs\" (UniqueName: \"kubernetes.io/projected/83db72b0-cd35-4665-ad4e-cfc7b78c4403-kube-api-access-2rccs\") pod \"goldmane-54d579b49d-6jcfn\" (UID: \"83db72b0-cd35-4665-ad4e-cfc7b78c4403\") " 
pod="calico-system/goldmane-54d579b49d-6jcfn" Sep 4 00:03:56.234292 kubelet[3603]: I0904 00:03:56.233569 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4bb9afcc-eef2-4e3e-b574-8eef798ccb88-calico-apiserver-certs\") pod \"calico-apiserver-69df694c46-r4snj\" (UID: \"4bb9afcc-eef2-4e3e-b574-8eef798ccb88\") " pod="calico-apiserver/calico-apiserver-69df694c46-r4snj" Sep 4 00:03:56.234292 kubelet[3603]: I0904 00:03:56.233592 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmhlc\" (UniqueName: \"kubernetes.io/projected/4bb9afcc-eef2-4e3e-b574-8eef798ccb88-kube-api-access-fmhlc\") pod \"calico-apiserver-69df694c46-r4snj\" (UID: \"4bb9afcc-eef2-4e3e-b574-8eef798ccb88\") " pod="calico-apiserver/calico-apiserver-69df694c46-r4snj" Sep 4 00:03:56.234292 kubelet[3603]: I0904 00:03:56.233788 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfdk\" (UniqueName: \"kubernetes.io/projected/8cd13f3f-b7b5-422b-a7be-6e9466ed1027-kube-api-access-ftfdk\") pod \"coredns-674b8bbfcf-8jwcz\" (UID: \"8cd13f3f-b7b5-422b-a7be-6e9466ed1027\") " pod="kube-system/coredns-674b8bbfcf-8jwcz" Sep 4 00:03:56.234421 kubelet[3603]: I0904 00:03:56.233859 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbz8l\" (UniqueName: \"kubernetes.io/projected/542d776f-a26a-48e7-95c9-068cfc34a1e2-kube-api-access-xbz8l\") pod \"coredns-674b8bbfcf-rrcdc\" (UID: \"542d776f-a26a-48e7-95c9-068cfc34a1e2\") " pod="kube-system/coredns-674b8bbfcf-rrcdc" Sep 4 00:03:56.435206 containerd[2018]: time="2025-09-04T00:03:56.434903804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc47847cc-bzsrx,Uid:4ad89bb2-eb31-48be-ab76-8e8be76037a2,Namespace:calico-system,Attempt:0,}" Sep 4 
00:03:56.483252 containerd[2018]: time="2025-09-04T00:03:56.483202017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8jwcz,Uid:8cd13f3f-b7b5-422b-a7be-6e9466ed1027,Namespace:kube-system,Attempt:0,}" Sep 4 00:03:56.486328 containerd[2018]: time="2025-09-04T00:03:56.486276537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd96bd78c-2tmlb,Uid:d11e3c26-312a-4514-8b46-aceefc182a80,Namespace:calico-system,Attempt:0,}" Sep 4 00:03:56.499420 containerd[2018]: time="2025-09-04T00:03:56.499359181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69df694c46-r4snj,Uid:4bb9afcc-eef2-4e3e-b574-8eef798ccb88,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:03:56.515039 containerd[2018]: time="2025-09-04T00:03:56.514981697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6jcfn,Uid:83db72b0-cd35-4665-ad4e-cfc7b78c4403,Namespace:calico-system,Attempt:0,}" Sep 4 00:03:56.525119 containerd[2018]: time="2025-09-04T00:03:56.524979524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69df694c46-rpcnm,Uid:b9ec538b-9a79-4c37-b676-3bae16c7baee,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:03:56.533869 containerd[2018]: time="2025-09-04T00:03:56.533604796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrcdc,Uid:542d776f-a26a-48e7-95c9-068cfc34a1e2,Namespace:kube-system,Attempt:0,}" Sep 4 00:03:56.868041 containerd[2018]: time="2025-09-04T00:03:56.867024108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 00:03:56.898847 containerd[2018]: time="2025-09-04T00:03:56.898776501Z" level=error msg="Failed to destroy network for sandbox \"d0a9d3b86882cb41ee1b79c93368f2bc62f4dfba2c6b05a3b46ff5d9b4075ec2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 4 00:03:56.900674 containerd[2018]: time="2025-09-04T00:03:56.900635536Z" level=error msg="Failed to destroy network for sandbox \"1517a485e34bb8efcb99d27748293a9355a599dd76c9e309779d101d0ade0a77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.904048 containerd[2018]: time="2025-09-04T00:03:56.903996657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrcdc,Uid:542d776f-a26a-48e7-95c9-068cfc34a1e2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a9d3b86882cb41ee1b79c93368f2bc62f4dfba2c6b05a3b46ff5d9b4075ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.907198 kubelet[3603]: E0904 00:03:56.907114 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a9d3b86882cb41ee1b79c93368f2bc62f4dfba2c6b05a3b46ff5d9b4075ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.907586 kubelet[3603]: E0904 00:03:56.907223 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a9d3b86882cb41ee1b79c93368f2bc62f4dfba2c6b05a3b46ff5d9b4075ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rrcdc" Sep 4 00:03:56.907586 kubelet[3603]: E0904 00:03:56.907242 3603 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a9d3b86882cb41ee1b79c93368f2bc62f4dfba2c6b05a3b46ff5d9b4075ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rrcdc" Sep 4 00:03:56.907586 kubelet[3603]: E0904 00:03:56.907391 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rrcdc_kube-system(542d776f-a26a-48e7-95c9-068cfc34a1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rrcdc_kube-system(542d776f-a26a-48e7-95c9-068cfc34a1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0a9d3b86882cb41ee1b79c93368f2bc62f4dfba2c6b05a3b46ff5d9b4075ec2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rrcdc" podUID="542d776f-a26a-48e7-95c9-068cfc34a1e2" Sep 4 00:03:56.907745 containerd[2018]: time="2025-09-04T00:03:56.907338532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6jcfn,Uid:83db72b0-cd35-4665-ad4e-cfc7b78c4403,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1517a485e34bb8efcb99d27748293a9355a599dd76c9e309779d101d0ade0a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.910609 kubelet[3603]: E0904 00:03:56.910569 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1517a485e34bb8efcb99d27748293a9355a599dd76c9e309779d101d0ade0a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.910726 kubelet[3603]: E0904 00:03:56.910630 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1517a485e34bb8efcb99d27748293a9355a599dd76c9e309779d101d0ade0a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6jcfn" Sep 4 00:03:56.910726 kubelet[3603]: E0904 00:03:56.910658 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1517a485e34bb8efcb99d27748293a9355a599dd76c9e309779d101d0ade0a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6jcfn" Sep 4 00:03:56.910953 kubelet[3603]: E0904 00:03:56.910921 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-6jcfn_calico-system(83db72b0-cd35-4665-ad4e-cfc7b78c4403)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-6jcfn_calico-system(83db72b0-cd35-4665-ad4e-cfc7b78c4403)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1517a485e34bb8efcb99d27748293a9355a599dd76c9e309779d101d0ade0a77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-6jcfn" 
podUID="83db72b0-cd35-4665-ad4e-cfc7b78c4403" Sep 4 00:03:56.946471 containerd[2018]: time="2025-09-04T00:03:56.946391670Z" level=error msg="Failed to destroy network for sandbox \"3cc948b70a97e36777f4737ebfa9c4bd9a4fc0c8bafa0118c982237f7400353f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.948476 containerd[2018]: time="2025-09-04T00:03:56.948258722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc47847cc-bzsrx,Uid:4ad89bb2-eb31-48be-ab76-8e8be76037a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc948b70a97e36777f4737ebfa9c4bd9a4fc0c8bafa0118c982237f7400353f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.949024 kubelet[3603]: E0904 00:03:56.948856 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc948b70a97e36777f4737ebfa9c4bd9a4fc0c8bafa0118c982237f7400353f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.949024 kubelet[3603]: E0904 00:03:56.948964 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc948b70a97e36777f4737ebfa9c4bd9a4fc0c8bafa0118c982237f7400353f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fc47847cc-bzsrx" Sep 4 00:03:56.949024 kubelet[3603]: E0904 
00:03:56.948991 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc948b70a97e36777f4737ebfa9c4bd9a4fc0c8bafa0118c982237f7400353f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fc47847cc-bzsrx" Sep 4 00:03:56.949583 kubelet[3603]: E0904 00:03:56.949052 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fc47847cc-bzsrx_calico-system(4ad89bb2-eb31-48be-ab76-8e8be76037a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fc47847cc-bzsrx_calico-system(4ad89bb2-eb31-48be-ab76-8e8be76037a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cc948b70a97e36777f4737ebfa9c4bd9a4fc0c8bafa0118c982237f7400353f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fc47847cc-bzsrx" podUID="4ad89bb2-eb31-48be-ab76-8e8be76037a2" Sep 4 00:03:56.955636 containerd[2018]: time="2025-09-04T00:03:56.955557840Z" level=error msg="Failed to destroy network for sandbox \"413bddd61312e0343cc65f82169a5b421268e5ef73c43156f8e34734207226b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.957309 containerd[2018]: time="2025-09-04T00:03:56.957267120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd96bd78c-2tmlb,Uid:d11e3c26-312a-4514-8b46-aceefc182a80,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"413bddd61312e0343cc65f82169a5b421268e5ef73c43156f8e34734207226b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.958075 kubelet[3603]: E0904 00:03:56.958020 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413bddd61312e0343cc65f82169a5b421268e5ef73c43156f8e34734207226b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.958075 kubelet[3603]: E0904 00:03:56.958076 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413bddd61312e0343cc65f82169a5b421268e5ef73c43156f8e34734207226b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd96bd78c-2tmlb" Sep 4 00:03:56.958075 kubelet[3603]: E0904 00:03:56.958096 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413bddd61312e0343cc65f82169a5b421268e5ef73c43156f8e34734207226b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd96bd78c-2tmlb" Sep 4 00:03:56.958305 kubelet[3603]: E0904 00:03:56.958150 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fd96bd78c-2tmlb_calico-system(d11e3c26-312a-4514-8b46-aceefc182a80)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-7fd96bd78c-2tmlb_calico-system(d11e3c26-312a-4514-8b46-aceefc182a80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"413bddd61312e0343cc65f82169a5b421268e5ef73c43156f8e34734207226b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd96bd78c-2tmlb" podUID="d11e3c26-312a-4514-8b46-aceefc182a80" Sep 4 00:03:56.963500 containerd[2018]: time="2025-09-04T00:03:56.963365306Z" level=error msg="Failed to destroy network for sandbox \"a57b375a1939073621e92c1678b2b281023ce3fbd6836883022753024c2b07f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.964381 containerd[2018]: time="2025-09-04T00:03:56.963642169Z" level=error msg="Failed to destroy network for sandbox \"0a5eb37f7456864fa3b3398b25d6d33685078f2134b966b59b0b2f4fcb76470b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.965247 containerd[2018]: time="2025-09-04T00:03:56.965135387Z" level=error msg="Failed to destroy network for sandbox \"a79aa6a072e786048ba0cd421b25300e1c4e5eb6bed091347644db6bd357cd16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.965353 containerd[2018]: time="2025-09-04T00:03:56.965158830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8jwcz,Uid:8cd13f3f-b7b5-422b-a7be-6e9466ed1027,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"0a5eb37f7456864fa3b3398b25d6d33685078f2134b966b59b0b2f4fcb76470b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.965577 kubelet[3603]: E0904 00:03:56.965487 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5eb37f7456864fa3b3398b25d6d33685078f2134b966b59b0b2f4fcb76470b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.965577 kubelet[3603]: E0904 00:03:56.965558 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5eb37f7456864fa3b3398b25d6d33685078f2134b966b59b0b2f4fcb76470b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8jwcz" Sep 4 00:03:56.966145 kubelet[3603]: E0904 00:03:56.965586 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5eb37f7456864fa3b3398b25d6d33685078f2134b966b59b0b2f4fcb76470b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8jwcz" Sep 4 00:03:56.966145 kubelet[3603]: E0904 00:03:56.965751 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8jwcz_kube-system(8cd13f3f-b7b5-422b-a7be-6e9466ed1027)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-674b8bbfcf-8jwcz_kube-system(8cd13f3f-b7b5-422b-a7be-6e9466ed1027)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a5eb37f7456864fa3b3398b25d6d33685078f2134b966b59b0b2f4fcb76470b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8jwcz" podUID="8cd13f3f-b7b5-422b-a7be-6e9466ed1027" Sep 4 00:03:56.966635 containerd[2018]: time="2025-09-04T00:03:56.966400619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69df694c46-rpcnm,Uid:b9ec538b-9a79-4c37-b676-3bae16c7baee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57b375a1939073621e92c1678b2b281023ce3fbd6836883022753024c2b07f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.967041 kubelet[3603]: E0904 00:03:56.967004 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57b375a1939073621e92c1678b2b281023ce3fbd6836883022753024c2b07f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.967109 kubelet[3603]: E0904 00:03:56.967054 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57b375a1939073621e92c1678b2b281023ce3fbd6836883022753024c2b07f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-69df694c46-rpcnm" Sep 4 00:03:56.967109 kubelet[3603]: E0904 00:03:56.967081 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57b375a1939073621e92c1678b2b281023ce3fbd6836883022753024c2b07f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69df694c46-rpcnm" Sep 4 00:03:56.967310 kubelet[3603]: E0904 00:03:56.967149 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69df694c46-rpcnm_calico-apiserver(b9ec538b-9a79-4c37-b676-3bae16c7baee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69df694c46-rpcnm_calico-apiserver(b9ec538b-9a79-4c37-b676-3bae16c7baee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a57b375a1939073621e92c1678b2b281023ce3fbd6836883022753024c2b07f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69df694c46-rpcnm" podUID="b9ec538b-9a79-4c37-b676-3bae16c7baee" Sep 4 00:03:56.967479 containerd[2018]: time="2025-09-04T00:03:56.967438487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69df694c46-r4snj,Uid:4bb9afcc-eef2-4e3e-b574-8eef798ccb88,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79aa6a072e786048ba0cd421b25300e1c4e5eb6bed091347644db6bd357cd16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 
00:03:56.967654 kubelet[3603]: E0904 00:03:56.967625 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79aa6a072e786048ba0cd421b25300e1c4e5eb6bed091347644db6bd357cd16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:56.967717 kubelet[3603]: E0904 00:03:56.967672 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79aa6a072e786048ba0cd421b25300e1c4e5eb6bed091347644db6bd357cd16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69df694c46-r4snj" Sep 4 00:03:56.967717 kubelet[3603]: E0904 00:03:56.967697 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79aa6a072e786048ba0cd421b25300e1c4e5eb6bed091347644db6bd357cd16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69df694c46-r4snj" Sep 4 00:03:56.967805 kubelet[3603]: E0904 00:03:56.967747 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69df694c46-r4snj_calico-apiserver(4bb9afcc-eef2-4e3e-b574-8eef798ccb88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69df694c46-r4snj_calico-apiserver(4bb9afcc-eef2-4e3e-b574-8eef798ccb88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a79aa6a072e786048ba0cd421b25300e1c4e5eb6bed091347644db6bd357cd16\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69df694c46-r4snj" podUID="4bb9afcc-eef2-4e3e-b574-8eef798ccb88" Sep 4 00:03:57.645573 systemd[1]: Created slice kubepods-besteffort-pod4c03246a_81f1_4d02_b2e6_f80b3ba3c00c.slice - libcontainer container kubepods-besteffort-pod4c03246a_81f1_4d02_b2e6_f80b3ba3c00c.slice. Sep 4 00:03:57.648966 containerd[2018]: time="2025-09-04T00:03:57.648833732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlj2d,Uid:4c03246a-81f1-4d02-b2e6-f80b3ba3c00c,Namespace:calico-system,Attempt:0,}" Sep 4 00:03:57.725011 containerd[2018]: time="2025-09-04T00:03:57.724959419Z" level=error msg="Failed to destroy network for sandbox \"0c3b85cb031cf17e1d9e31dcfdc074f4ba911716c683a9874397a1752bbf777c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:57.728304 systemd[1]: run-netns-cni\x2d44523af6\x2d6706\x2d3c25\x2df6df\x2dbd8662780c4b.mount: Deactivated successfully. 
Sep 4 00:03:57.729503 containerd[2018]: time="2025-09-04T00:03:57.729446883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlj2d,Uid:4c03246a-81f1-4d02-b2e6-f80b3ba3c00c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3b85cb031cf17e1d9e31dcfdc074f4ba911716c683a9874397a1752bbf777c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:57.731305 kubelet[3603]: E0904 00:03:57.730155 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3b85cb031cf17e1d9e31dcfdc074f4ba911716c683a9874397a1752bbf777c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:03:57.731305 kubelet[3603]: E0904 00:03:57.730225 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3b85cb031cf17e1d9e31dcfdc074f4ba911716c683a9874397a1752bbf777c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlj2d" Sep 4 00:03:57.731305 kubelet[3603]: E0904 00:03:57.730262 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3b85cb031cf17e1d9e31dcfdc074f4ba911716c683a9874397a1752bbf777c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlj2d" Sep 4 
00:03:57.731523 kubelet[3603]: E0904 00:03:57.730338 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vlj2d_calico-system(4c03246a-81f1-4d02-b2e6-f80b3ba3c00c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vlj2d_calico-system(4c03246a-81f1-4d02-b2e6-f80b3ba3c00c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c3b85cb031cf17e1d9e31dcfdc074f4ba911716c683a9874397a1752bbf777c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vlj2d" podUID="4c03246a-81f1-4d02-b2e6-f80b3ba3c00c" Sep 4 00:04:06.499246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787139381.mount: Deactivated successfully. Sep 4 00:04:06.546873 containerd[2018]: time="2025-09-04T00:04:06.531669997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:06.547905 containerd[2018]: time="2025-09-04T00:04:06.547838479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 00:04:06.596932 containerd[2018]: time="2025-09-04T00:04:06.596328235Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:06.599170 containerd[2018]: time="2025-09-04T00:04:06.599129712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:06.600597 containerd[2018]: time="2025-09-04T00:04:06.600546368Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.732629898s" Sep 4 00:04:06.600597 containerd[2018]: time="2025-09-04T00:04:06.600588390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 00:04:06.640994 containerd[2018]: time="2025-09-04T00:04:06.640936087Z" level=info msg="CreateContainer within sandbox \"66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 00:04:06.685101 containerd[2018]: time="2025-09-04T00:04:06.684194952Z" level=info msg="Container 862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:06.686857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164629087.mount: Deactivated successfully. 
Sep 4 00:04:06.728140 containerd[2018]: time="2025-09-04T00:04:06.728098505Z" level=info msg="CreateContainer within sandbox \"66692b3bb5f86f609d95a82289ac28ca2335069ac3c47376b0043161920da5eb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\"" Sep 4 00:04:06.728818 containerd[2018]: time="2025-09-04T00:04:06.728792825Z" level=info msg="StartContainer for \"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\"" Sep 4 00:04:06.738799 containerd[2018]: time="2025-09-04T00:04:06.738756455Z" level=info msg="connecting to shim 862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213" address="unix:///run/containerd/s/7070ddb9a8f602df415a2773bd12ec3eef273f615592d6c429668cd42c8dee61" protocol=ttrpc version=3 Sep 4 00:04:06.936082 systemd[1]: Started cri-containerd-862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213.scope - libcontainer container 862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213. Sep 4 00:04:06.983900 kubelet[3603]: I0904 00:04:06.983692 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:04:07.025709 containerd[2018]: time="2025-09-04T00:04:07.025537874Z" level=info msg="StartContainer for \"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\" returns successfully" Sep 4 00:04:07.596695 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 00:04:07.639924 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 00:04:08.095837 kubelet[3603]: I0904 00:04:08.095004 3603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ad89bb2-eb31-48be-ab76-8e8be76037a2-whisker-backend-key-pair\") pod \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\" (UID: \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\") " Sep 4 00:04:08.095837 kubelet[3603]: I0904 00:04:08.095092 3603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ad89bb2-eb31-48be-ab76-8e8be76037a2-whisker-ca-bundle\") pod \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\" (UID: \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\") " Sep 4 00:04:08.095837 kubelet[3603]: I0904 00:04:08.095118 3603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc2t9\" (UniqueName: \"kubernetes.io/projected/4ad89bb2-eb31-48be-ab76-8e8be76037a2-kube-api-access-hc2t9\") pod \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\" (UID: \"4ad89bb2-eb31-48be-ab76-8e8be76037a2\") " Sep 4 00:04:08.139904 kubelet[3603]: I0904 00:04:08.131301 3603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ad89bb2-eb31-48be-ab76-8e8be76037a2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4ad89bb2-eb31-48be-ab76-8e8be76037a2" (UID: "4ad89bb2-eb31-48be-ab76-8e8be76037a2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:04:08.142316 systemd[1]: var-lib-kubelet-pods-4ad89bb2\x2deb31\x2d48be\x2dab76\x2d8e8be76037a2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 4 00:04:08.144248 kubelet[3603]: I0904 00:04:08.144203 3603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad89bb2-eb31-48be-ab76-8e8be76037a2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4ad89bb2-eb31-48be-ab76-8e8be76037a2" (UID: "4ad89bb2-eb31-48be-ab76-8e8be76037a2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:04:08.148948 kubelet[3603]: I0904 00:04:08.148888 3603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad89bb2-eb31-48be-ab76-8e8be76037a2-kube-api-access-hc2t9" (OuterVolumeSpecName: "kube-api-access-hc2t9") pod "4ad89bb2-eb31-48be-ab76-8e8be76037a2" (UID: "4ad89bb2-eb31-48be-ab76-8e8be76037a2"). InnerVolumeSpecName "kube-api-access-hc2t9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:04:08.151470 systemd[1]: var-lib-kubelet-pods-4ad89bb2\x2deb31\x2d48be\x2dab76\x2d8e8be76037a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhc2t9.mount: Deactivated successfully. 
Sep 4 00:04:08.196729 kubelet[3603]: I0904 00:04:08.196574 3603 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ad89bb2-eb31-48be-ab76-8e8be76037a2-whisker-backend-key-pair\") on node \"ip-172-31-20-83\" DevicePath \"\"" Sep 4 00:04:08.196729 kubelet[3603]: I0904 00:04:08.196681 3603 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ad89bb2-eb31-48be-ab76-8e8be76037a2-whisker-ca-bundle\") on node \"ip-172-31-20-83\" DevicePath \"\"" Sep 4 00:04:08.196729 kubelet[3603]: I0904 00:04:08.196701 3603 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hc2t9\" (UniqueName: \"kubernetes.io/projected/4ad89bb2-eb31-48be-ab76-8e8be76037a2-kube-api-access-hc2t9\") on node \"ip-172-31-20-83\" DevicePath \"\"" Sep 4 00:04:08.390465 containerd[2018]: time="2025-09-04T00:04:08.390347384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\" id:\"dac6aaac632119a093f2c9461a573869f36bdc89e1e7eb8363a72b128f0cc613\" pid:4688 exit_status:1 exited_at:{seconds:1756944248 nanos:387685137}" Sep 4 00:04:08.640399 containerd[2018]: time="2025-09-04T00:04:08.640367215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69df694c46-rpcnm,Uid:b9ec538b-9a79-4c37-b676-3bae16c7baee,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:04:08.640693 containerd[2018]: time="2025-09-04T00:04:08.640400902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69df694c46-r4snj,Uid:4bb9afcc-eef2-4e3e-b574-8eef798ccb88,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:04:08.640950 containerd[2018]: time="2025-09-04T00:04:08.640507948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd96bd78c-2tmlb,Uid:d11e3c26-312a-4514-8b46-aceefc182a80,Namespace:calico-system,Attempt:0,}" Sep 4 
00:04:08.949958 systemd[1]: Removed slice kubepods-besteffort-pod4ad89bb2_eb31_48be_ab76_8e8be76037a2.slice - libcontainer container kubepods-besteffort-pod4ad89bb2_eb31_48be_ab76_8e8be76037a2.slice. Sep 4 00:04:08.997721 kubelet[3603]: I0904 00:04:08.975807 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l95pt" podStartSLOduration=3.175162935 podStartE2EDuration="23.973398406s" podCreationTimestamp="2025-09-04 00:03:45 +0000 UTC" firstStartedPulling="2025-09-04 00:03:45.80301159 +0000 UTC m=+22.385244572" lastFinishedPulling="2025-09-04 00:04:06.601247056 +0000 UTC m=+43.183480043" observedRunningTime="2025-09-04 00:04:08.016294005 +0000 UTC m=+44.598527054" watchObservedRunningTime="2025-09-04 00:04:08.973398406 +0000 UTC m=+45.555631406" Sep 4 00:04:09.303935 containerd[2018]: time="2025-09-04T00:04:09.303873467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\" id:\"4f028a001c0cae174bb3ec106a3afc6c81eda4ae9f08787174756ae8b179e312\" pid:4773 exit_status:1 exited_at:{seconds:1756944249 nanos:303309499}" Sep 4 00:04:09.345683 systemd-networkd[1850]: calice1584e9d57: Link UP Sep 4 00:04:09.348167 systemd-networkd[1850]: calice1584e9d57: Gained carrier Sep 4 00:04:09.384827 containerd[2018]: 2025-09-04 00:04:08.709 [INFO][4725] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:04:09.384827 containerd[2018]: 2025-09-04 00:04:08.765 [INFO][4725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0 calico-kube-controllers-7fd96bd78c- calico-system d11e3c26-312a-4514-8b46-aceefc182a80 852 0 2025-09-04 00:03:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fd96bd78c projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-83 calico-kube-controllers-7fd96bd78c-2tmlb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calice1584e9d57 [] [] }} ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Namespace="calico-system" Pod="calico-kube-controllers-7fd96bd78c-2tmlb" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-" Sep 4 00:04:09.384827 containerd[2018]: 2025-09-04 00:04:08.766 [INFO][4725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Namespace="calico-system" Pod="calico-kube-controllers-7fd96bd78c-2tmlb" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" Sep 4 00:04:09.384827 containerd[2018]: 2025-09-04 00:04:09.217 [INFO][4747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" HandleID="k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Workload="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.222 [INFO][4747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" HandleID="k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Workload="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-83", "pod":"calico-kube-controllers-7fd96bd78c-2tmlb", "timestamp":"2025-09-04 00:04:09.215220339 +0000 UTC"}, Hostname:"ip-172-31-20-83", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.222 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.224 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.224 [INFO][4747] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-83' Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.253 [INFO][4747] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" host="ip-172-31-20-83" Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.270 [INFO][4747] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-83" Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.278 [INFO][4747] ipam/ipam.go 511: Trying affinity for 192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.282 [INFO][4747] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.386071 containerd[2018]: 2025-09-04 00:04:09.287 [INFO][4747] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.388054 containerd[2018]: 2025-09-04 00:04:09.287 [INFO][4747] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" host="ip-172-31-20-83" Sep 4 00:04:09.388054 containerd[2018]: 2025-09-04 00:04:09.289 [INFO][4747] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace Sep 4 00:04:09.388054 containerd[2018]: 2025-09-04 00:04:09.297 [INFO][4747] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" host="ip-172-31-20-83" Sep 4 00:04:09.388054 containerd[2018]: 2025-09-04 00:04:09.310 [INFO][4747] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.112.65/26] block=192.168.112.64/26 handle="k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" host="ip-172-31-20-83" Sep 4 00:04:09.388054 containerd[2018]: 2025-09-04 00:04:09.310 [INFO][4747] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.65/26] handle="k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" host="ip-172-31-20-83" Sep 4 00:04:09.388054 containerd[2018]: 2025-09-04 00:04:09.310 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:04:09.388054 containerd[2018]: 2025-09-04 00:04:09.311 [INFO][4747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.65/26] IPv6=[] ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" HandleID="k8s-pod-network.7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Workload="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" Sep 4 00:04:09.389039 containerd[2018]: 2025-09-04 00:04:09.320 [INFO][4725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Namespace="calico-system" Pod="calico-kube-controllers-7fd96bd78c-2tmlb" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0", GenerateName:"calico-kube-controllers-7fd96bd78c-", Namespace:"calico-system", SelfLink:"", UID:"d11e3c26-312a-4514-8b46-aceefc182a80", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd96bd78c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"", Pod:"calico-kube-controllers-7fd96bd78c-2tmlb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.112.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice1584e9d57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:09.389634 containerd[2018]: 2025-09-04 00:04:09.320 [INFO][4725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.65/32] ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Namespace="calico-system" Pod="calico-kube-controllers-7fd96bd78c-2tmlb" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" Sep 4 00:04:09.389634 containerd[2018]: 2025-09-04 00:04:09.321 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice1584e9d57 ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Namespace="calico-system" Pod="calico-kube-controllers-7fd96bd78c-2tmlb" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" Sep 4 00:04:09.389634 containerd[2018]: 2025-09-04 00:04:09.351 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Namespace="calico-system" Pod="calico-kube-controllers-7fd96bd78c-2tmlb" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" Sep 4 00:04:09.436852 containerd[2018]: 2025-09-04 00:04:09.352 [INFO][4725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Namespace="calico-system" Pod="calico-kube-controllers-7fd96bd78c-2tmlb" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0", GenerateName:"calico-kube-controllers-7fd96bd78c-", Namespace:"calico-system", SelfLink:"", UID:"d11e3c26-312a-4514-8b46-aceefc182a80", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd96bd78c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace", Pod:"calico-kube-controllers-7fd96bd78c-2tmlb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice1584e9d57", MAC:"72:c6:3e:e3:4f:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:09.438223 kubelet[3603]: I0904 00:04:09.419049 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnmjp\" (UniqueName: \"kubernetes.io/projected/86902e8b-4fdb-4ea2-8c95-95b8435492c8-kube-api-access-wnmjp\") pod \"whisker-6bbf6d5bdd-m9zrs\" (UID: \"86902e8b-4fdb-4ea2-8c95-95b8435492c8\") " 
pod="calico-system/whisker-6bbf6d5bdd-m9zrs" Sep 4 00:04:09.438223 kubelet[3603]: I0904 00:04:09.426537 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/86902e8b-4fdb-4ea2-8c95-95b8435492c8-whisker-backend-key-pair\") pod \"whisker-6bbf6d5bdd-m9zrs\" (UID: \"86902e8b-4fdb-4ea2-8c95-95b8435492c8\") " pod="calico-system/whisker-6bbf6d5bdd-m9zrs" Sep 4 00:04:09.438223 kubelet[3603]: I0904 00:04:09.426619 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86902e8b-4fdb-4ea2-8c95-95b8435492c8-whisker-ca-bundle\") pod \"whisker-6bbf6d5bdd-m9zrs\" (UID: \"86902e8b-4fdb-4ea2-8c95-95b8435492c8\") " pod="calico-system/whisker-6bbf6d5bdd-m9zrs" Sep 4 00:04:09.397737 systemd[1]: Created slice kubepods-besteffort-pod86902e8b_4fdb_4ea2_8c95_95b8435492c8.slice - libcontainer container kubepods-besteffort-pod86902e8b_4fdb_4ea2_8c95_95b8435492c8.slice. Sep 4 00:04:09.440972 containerd[2018]: 2025-09-04 00:04:09.375 [INFO][4725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" Namespace="calico-system" Pod="calico-kube-controllers-7fd96bd78c-2tmlb" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--kube--controllers--7fd96bd78c--2tmlb-eth0" Sep 4 00:04:09.439039 (udev-worker)[4656]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 00:04:09.519194 systemd-networkd[1850]: cali5f7c713629e: Link UP Sep 4 00:04:09.519493 systemd-networkd[1850]: cali5f7c713629e: Gained carrier Sep 4 00:04:09.625079 containerd[2018]: 2025-09-04 00:04:08.717 [INFO][4726] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:04:09.625079 containerd[2018]: 2025-09-04 00:04:08.765 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0 calico-apiserver-69df694c46- calico-apiserver b9ec538b-9a79-4c37-b676-3bae16c7baee 849 0 2025-09-04 00:03:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69df694c46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-83 calico-apiserver-69df694c46-rpcnm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f7c713629e [] [] }} ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-rpcnm" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-" Sep 4 00:04:09.625079 containerd[2018]: 2025-09-04 00:04:08.766 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-rpcnm" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" Sep 4 00:04:09.625079 containerd[2018]: 2025-09-04 00:04:09.217 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" 
HandleID="k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Workload="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.224 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" HandleID="k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Workload="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c2f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-83", "pod":"calico-apiserver-69df694c46-rpcnm", "timestamp":"2025-09-04 00:04:09.217349766 +0000 UTC"}, Hostname:"ip-172-31-20-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.225 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.311 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.311 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-83' Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.358 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" host="ip-172-31-20-83" Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.381 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-83" Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.409 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.421 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.625397 containerd[2018]: 2025-09-04 00:04:09.429 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.625787 containerd[2018]: 2025-09-04 00:04:09.429 [INFO][4751] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" host="ip-172-31-20-83" Sep 4 00:04:09.625787 containerd[2018]: 2025-09-04 00:04:09.437 [INFO][4751] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10 Sep 4 00:04:09.625787 containerd[2018]: 2025-09-04 00:04:09.456 [INFO][4751] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" host="ip-172-31-20-83" Sep 4 00:04:09.625787 containerd[2018]: 2025-09-04 00:04:09.508 [INFO][4751] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.112.66/26] block=192.168.112.64/26 
handle="k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" host="ip-172-31-20-83" Sep 4 00:04:09.625787 containerd[2018]: 2025-09-04 00:04:09.508 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.66/26] handle="k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" host="ip-172-31-20-83" Sep 4 00:04:09.625787 containerd[2018]: 2025-09-04 00:04:09.508 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:09.625787 containerd[2018]: 2025-09-04 00:04:09.508 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.66/26] IPv6=[] ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" HandleID="k8s-pod-network.a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Workload="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" Sep 4 00:04:09.630391 containerd[2018]: 2025-09-04 00:04:09.513 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-rpcnm" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0", GenerateName:"calico-apiserver-69df694c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9ec538b-9a79-4c37-b676-3bae16c7baee", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69df694c46", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"", Pod:"calico-apiserver-69df694c46-rpcnm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f7c713629e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:09.630525 containerd[2018]: 2025-09-04 00:04:09.513 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.66/32] ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-rpcnm" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" Sep 4 00:04:09.630525 containerd[2018]: 2025-09-04 00:04:09.513 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f7c713629e ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-rpcnm" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" Sep 4 00:04:09.630525 containerd[2018]: 2025-09-04 00:04:09.516 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-rpcnm" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" Sep 4 00:04:09.630649 containerd[2018]: 2025-09-04 00:04:09.516 
[INFO][4726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-rpcnm" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0", GenerateName:"calico-apiserver-69df694c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9ec538b-9a79-4c37-b676-3bae16c7baee", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69df694c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10", Pod:"calico-apiserver-69df694c46-rpcnm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f7c713629e", MAC:"fa:e8:0a:88:8c:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:09.630750 containerd[2018]: 2025-09-04 00:04:09.589 [INFO][4726] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-rpcnm" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--rpcnm-eth0" Sep 4 00:04:09.669025 kubelet[3603]: I0904 00:04:09.668522 3603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad89bb2-eb31-48be-ab76-8e8be76037a2" path="/var/lib/kubelet/pods/4ad89bb2-eb31-48be-ab76-8e8be76037a2/volumes" Sep 4 00:04:09.731589 containerd[2018]: time="2025-09-04T00:04:09.731226296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bbf6d5bdd-m9zrs,Uid:86902e8b-4fdb-4ea2-8c95-95b8435492c8,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:09.755815 systemd-networkd[1850]: cali891f8e3b9a6: Link UP Sep 4 00:04:09.756044 systemd-networkd[1850]: cali891f8e3b9a6: Gained carrier Sep 4 00:04:09.855071 containerd[2018]: 2025-09-04 00:04:08.705 [INFO][4711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:04:09.855071 containerd[2018]: 2025-09-04 00:04:08.767 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0 calico-apiserver-69df694c46- calico-apiserver 4bb9afcc-eef2-4e3e-b574-8eef798ccb88 846 0 2025-09-04 00:03:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69df694c46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-83 calico-apiserver-69df694c46-r4snj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali891f8e3b9a6 [] [] }} ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Namespace="calico-apiserver" 
Pod="calico-apiserver-69df694c46-r4snj" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-" Sep 4 00:04:09.855071 containerd[2018]: 2025-09-04 00:04:08.767 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-r4snj" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" Sep 4 00:04:09.855071 containerd[2018]: 2025-09-04 00:04:09.217 [INFO][4749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" HandleID="k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Workload="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.222 [INFO][4749] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" HandleID="k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Workload="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001026f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-83", "pod":"calico-apiserver-69df694c46-r4snj", "timestamp":"2025-09-04 00:04:09.215855735 +0000 UTC"}, Hostname:"ip-172-31-20-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.224 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.508 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.508 [INFO][4749] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-83' Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.594 [INFO][4749] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" host="ip-172-31-20-83" Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.631 [INFO][4749] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-83" Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.669 [INFO][4749] ipam/ipam.go 511: Trying affinity for 192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.676 [INFO][4749] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.855845 containerd[2018]: 2025-09-04 00:04:09.683 [INFO][4749] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:09.858240 containerd[2018]: 2025-09-04 00:04:09.684 [INFO][4749] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" host="ip-172-31-20-83" Sep 4 00:04:09.858240 containerd[2018]: 2025-09-04 00:04:09.689 [INFO][4749] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f Sep 4 00:04:09.858240 containerd[2018]: 2025-09-04 00:04:09.705 [INFO][4749] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" host="ip-172-31-20-83" Sep 4 00:04:09.858240 containerd[2018]: 2025-09-04 
00:04:09.725 [INFO][4749] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.112.67/26] block=192.168.112.64/26 handle="k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" host="ip-172-31-20-83" Sep 4 00:04:09.858240 containerd[2018]: 2025-09-04 00:04:09.725 [INFO][4749] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.67/26] handle="k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" host="ip-172-31-20-83" Sep 4 00:04:09.858240 containerd[2018]: 2025-09-04 00:04:09.725 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:09.858240 containerd[2018]: 2025-09-04 00:04:09.725 [INFO][4749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.67/26] IPv6=[] ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" HandleID="k8s-pod-network.a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Workload="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" Sep 4 00:04:09.858542 containerd[2018]: 2025-09-04 00:04:09.749 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-r4snj" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0", GenerateName:"calico-apiserver-69df694c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bb9afcc-eef2-4e3e-b574-8eef798ccb88", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69df694c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"", Pod:"calico-apiserver-69df694c46-r4snj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali891f8e3b9a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:09.861066 containerd[2018]: 2025-09-04 00:04:09.750 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.67/32] ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-r4snj" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" Sep 4 00:04:09.861066 containerd[2018]: 2025-09-04 00:04:09.750 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali891f8e3b9a6 ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-r4snj" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" Sep 4 00:04:09.861066 containerd[2018]: 2025-09-04 00:04:09.761 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-r4snj" 
WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" Sep 4 00:04:09.861231 containerd[2018]: 2025-09-04 00:04:09.764 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-r4snj" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0", GenerateName:"calico-apiserver-69df694c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bb9afcc-eef2-4e3e-b574-8eef798ccb88", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69df694c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f", Pod:"calico-apiserver-69df694c46-r4snj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali891f8e3b9a6", MAC:"ae:7f:fd:74:5d:ff", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:09.861342 containerd[2018]: 2025-09-04 00:04:09.832 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" Namespace="calico-apiserver" Pod="calico-apiserver-69df694c46-r4snj" WorkloadEndpoint="ip--172--31--20--83-k8s-calico--apiserver--69df694c46--r4snj-eth0" Sep 4 00:04:09.890241 containerd[2018]: time="2025-09-04T00:04:09.888575447Z" level=info msg="connecting to shim 7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace" address="unix:///run/containerd/s/159f1afd87fb760edaca4aae191874402d9ebebb28ec4319c0c7d8254079b246" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:09.934906 containerd[2018]: time="2025-09-04T00:04:09.933370403Z" level=info msg="connecting to shim a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10" address="unix:///run/containerd/s/3f9f3728c9298aa9421b3ca9534fcba4d3667b20a7f0705f153e81e657435003" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:09.995516 containerd[2018]: time="2025-09-04T00:04:09.995301902Z" level=info msg="connecting to shim a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f" address="unix:///run/containerd/s/208de93a76e36608e878540a6f82c806c68814e3f2c77b2c8dc5d019b5377b49" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:10.034123 systemd[1]: Started cri-containerd-a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10.scope - libcontainer container a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10. Sep 4 00:04:10.063546 systemd[1]: Started cri-containerd-7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace.scope - libcontainer container 7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace. 
Sep 4 00:04:10.187479 systemd[1]: Started cri-containerd-a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f.scope - libcontainer container a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f. Sep 4 00:04:10.360717 systemd-networkd[1850]: cali2977c5249c9: Link UP Sep 4 00:04:10.361825 systemd-networkd[1850]: cali2977c5249c9: Gained carrier Sep 4 00:04:10.407033 containerd[2018]: 2025-09-04 00:04:09.966 [INFO][4869] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:04:10.407033 containerd[2018]: 2025-09-04 00:04:10.080 [INFO][4869] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0 whisker-6bbf6d5bdd- calico-system 86902e8b-4fdb-4ea2-8c95-95b8435492c8 937 0 2025-09-04 00:04:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bbf6d5bdd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-83 whisker-6bbf6d5bdd-m9zrs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2977c5249c9 [] [] }} ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Namespace="calico-system" Pod="whisker-6bbf6d5bdd-m9zrs" WorkloadEndpoint="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-" Sep 4 00:04:10.407033 containerd[2018]: 2025-09-04 00:04:10.081 [INFO][4869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Namespace="calico-system" Pod="whisker-6bbf6d5bdd-m9zrs" WorkloadEndpoint="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" Sep 4 00:04:10.407033 containerd[2018]: 2025-09-04 00:04:10.253 [INFO][4991] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" 
HandleID="k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Workload="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.253 [INFO][4991] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" HandleID="k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Workload="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfb70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-83", "pod":"whisker-6bbf6d5bdd-m9zrs", "timestamp":"2025-09-04 00:04:10.252733089 +0000 UTC"}, Hostname:"ip-172-31-20-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.253 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.253 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.253 [INFO][4991] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-83' Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.265 [INFO][4991] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" host="ip-172-31-20-83" Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.275 [INFO][4991] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-83" Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.282 [INFO][4991] ipam/ipam.go 511: Trying affinity for 192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.285 [INFO][4991] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:10.407431 containerd[2018]: 2025-09-04 00:04:10.298 [INFO][4991] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:10.409686 containerd[2018]: 2025-09-04 00:04:10.299 [INFO][4991] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" host="ip-172-31-20-83" Sep 4 00:04:10.409686 containerd[2018]: 2025-09-04 00:04:10.307 [INFO][4991] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302 Sep 4 00:04:10.409686 containerd[2018]: 2025-09-04 00:04:10.325 [INFO][4991] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" host="ip-172-31-20-83" Sep 4 00:04:10.409686 containerd[2018]: 2025-09-04 00:04:10.339 [INFO][4991] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.112.68/26] block=192.168.112.64/26 
handle="k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" host="ip-172-31-20-83" Sep 4 00:04:10.409686 containerd[2018]: 2025-09-04 00:04:10.339 [INFO][4991] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.68/26] handle="k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" host="ip-172-31-20-83" Sep 4 00:04:10.409686 containerd[2018]: 2025-09-04 00:04:10.339 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:10.409686 containerd[2018]: 2025-09-04 00:04:10.339 [INFO][4991] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.68/26] IPv6=[] ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" HandleID="k8s-pod-network.78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Workload="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" Sep 4 00:04:10.412143 containerd[2018]: 2025-09-04 00:04:10.345 [INFO][4869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Namespace="calico-system" Pod="whisker-6bbf6d5bdd-m9zrs" WorkloadEndpoint="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0", GenerateName:"whisker-6bbf6d5bdd-", Namespace:"calico-system", SelfLink:"", UID:"86902e8b-4fdb-4ea2-8c95-95b8435492c8", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bbf6d5bdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"", Pod:"whisker-6bbf6d5bdd-m9zrs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.112.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2977c5249c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:10.412143 containerd[2018]: 2025-09-04 00:04:10.345 [INFO][4869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.68/32] ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Namespace="calico-system" Pod="whisker-6bbf6d5bdd-m9zrs" WorkloadEndpoint="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" Sep 4 00:04:10.412349 containerd[2018]: 2025-09-04 00:04:10.345 [INFO][4869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2977c5249c9 ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Namespace="calico-system" Pod="whisker-6bbf6d5bdd-m9zrs" WorkloadEndpoint="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" Sep 4 00:04:10.412349 containerd[2018]: 2025-09-04 00:04:10.367 [INFO][4869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Namespace="calico-system" Pod="whisker-6bbf6d5bdd-m9zrs" WorkloadEndpoint="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" Sep 4 00:04:10.412450 containerd[2018]: 2025-09-04 00:04:10.371 [INFO][4869] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Namespace="calico-system" 
Pod="whisker-6bbf6d5bdd-m9zrs" WorkloadEndpoint="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0", GenerateName:"whisker-6bbf6d5bdd-", Namespace:"calico-system", SelfLink:"", UID:"86902e8b-4fdb-4ea2-8c95-95b8435492c8", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bbf6d5bdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302", Pod:"whisker-6bbf6d5bdd-m9zrs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.112.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2977c5249c9", MAC:"26:27:d2:66:54:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:10.412572 containerd[2018]: 2025-09-04 00:04:10.399 [INFO][4869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" Namespace="calico-system" Pod="whisker-6bbf6d5bdd-m9zrs" WorkloadEndpoint="ip--172--31--20--83-k8s-whisker--6bbf6d5bdd--m9zrs-eth0" Sep 4 00:04:10.502566 containerd[2018]: 
time="2025-09-04T00:04:10.502438674Z" level=info msg="connecting to shim 78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302" address="unix:///run/containerd/s/1ee90fafd14f88bd4be3dcb869fc0562062b829ec6365331aa97713e10b3edfe" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:10.546393 containerd[2018]: time="2025-09-04T00:04:10.546330638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd96bd78c-2tmlb,Uid:d11e3c26-312a-4514-8b46-aceefc182a80,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace\"" Sep 4 00:04:10.581994 containerd[2018]: time="2025-09-04T00:04:10.581934311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 00:04:10.602971 containerd[2018]: time="2025-09-04T00:04:10.601827767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69df694c46-rpcnm,Uid:b9ec538b-9a79-4c37-b676-3bae16c7baee,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10\"" Sep 4 00:04:10.627130 systemd[1]: Started cri-containerd-78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302.scope - libcontainer container 78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302. 
Sep 4 00:04:10.641104 systemd-networkd[1850]: calice1584e9d57: Gained IPv6LL Sep 4 00:04:10.699362 containerd[2018]: time="2025-09-04T00:04:10.699227650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69df694c46-r4snj,Uid:4bb9afcc-eef2-4e3e-b574-8eef798ccb88,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f\"" Sep 4 00:04:10.769104 systemd-networkd[1850]: cali5f7c713629e: Gained IPv6LL Sep 4 00:04:10.855359 containerd[2018]: time="2025-09-04T00:04:10.855296224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bbf6d5bdd-m9zrs,Uid:86902e8b-4fdb-4ea2-8c95-95b8435492c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302\"" Sep 4 00:04:11.140485 containerd[2018]: time="2025-09-04T00:04:11.140370853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\" id:\"a8b665de433988184bebd7d14d5c9e823a9379db3fa670ad7438616ef6f4dfa9\" pid:5017 exit_status:1 exited_at:{seconds:1756944251 nanos:139327712}" Sep 4 00:04:11.409129 systemd-networkd[1850]: cali891f8e3b9a6: Gained IPv6LL Sep 4 00:04:11.857112 systemd-networkd[1850]: cali2977c5249c9: Gained IPv6LL Sep 4 00:04:11.910243 (udev-worker)[4657]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 00:04:11.910626 systemd-networkd[1850]: vxlan.calico: Link UP Sep 4 00:04:11.910631 systemd-networkd[1850]: vxlan.calico: Gained carrier Sep 4 00:04:12.641977 containerd[2018]: time="2025-09-04T00:04:12.641028604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlj2d,Uid:4c03246a-81f1-4d02-b2e6-f80b3ba3c00c,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:12.641977 containerd[2018]: time="2025-09-04T00:04:12.641388766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrcdc,Uid:542d776f-a26a-48e7-95c9-068cfc34a1e2,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:12.641977 containerd[2018]: time="2025-09-04T00:04:12.641674506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8jwcz,Uid:8cd13f3f-b7b5-422b-a7be-6e9466ed1027,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:12.643570 containerd[2018]: time="2025-09-04T00:04:12.642109076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6jcfn,Uid:83db72b0-cd35-4665-ad4e-cfc7b78c4403,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:13.136489 systemd-networkd[1850]: cali408a95cd08a: Link UP Sep 4 00:04:13.137007 systemd-networkd[1850]: cali408a95cd08a: Gained carrier Sep 4 00:04:13.193843 containerd[2018]: 2025-09-04 00:04:12.907 [INFO][5247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0 coredns-674b8bbfcf- kube-system 8cd13f3f-b7b5-422b-a7be-6e9466ed1027 854 0 2025-09-04 00:03:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-83 coredns-674b8bbfcf-8jwcz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali408a95cd08a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-8jwcz" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-" Sep 4 00:04:13.193843 containerd[2018]: 2025-09-04 00:04:12.909 [INFO][5247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-8jwcz" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" Sep 4 00:04:13.193843 containerd[2018]: 2025-09-04 00:04:12.992 [INFO][5298] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" HandleID="k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Workload="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:12.993 [INFO][5298] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" HandleID="k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Workload="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfa70), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-83", "pod":"coredns-674b8bbfcf-8jwcz", "timestamp":"2025-09-04 00:04:12.992834763 +0000 UTC"}, Hostname:"ip-172-31-20-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:12.993 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:12.994 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:12.994 [INFO][5298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-83' Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:13.008 [INFO][5298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" host="ip-172-31-20-83" Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:13.027 [INFO][5298] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-83" Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:13.050 [INFO][5298] ipam/ipam.go 511: Trying affinity for 192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:13.064 [INFO][5298] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:13.075 [INFO][5298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.194689 containerd[2018]: 2025-09-04 00:04:13.075 [INFO][5298] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" host="ip-172-31-20-83" Sep 4 00:04:13.195056 containerd[2018]: 2025-09-04 00:04:13.091 [INFO][5298] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2 Sep 4 00:04:13.195056 containerd[2018]: 2025-09-04 00:04:13.104 [INFO][5298] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" host="ip-172-31-20-83" Sep 4 00:04:13.195056 containerd[2018]: 2025-09-04 
00:04:13.117 [INFO][5298] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.112.69/26] block=192.168.112.64/26 handle="k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" host="ip-172-31-20-83" Sep 4 00:04:13.195056 containerd[2018]: 2025-09-04 00:04:13.117 [INFO][5298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.69/26] handle="k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" host="ip-172-31-20-83" Sep 4 00:04:13.195056 containerd[2018]: 2025-09-04 00:04:13.117 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:13.195056 containerd[2018]: 2025-09-04 00:04:13.117 [INFO][5298] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.69/26] IPv6=[] ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" HandleID="k8s-pod-network.38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Workload="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" Sep 4 00:04:13.195281 containerd[2018]: 2025-09-04 00:04:13.124 [INFO][5247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-8jwcz" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cd13f3f-b7b5-422b-a7be-6e9466ed1027", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"", Pod:"coredns-674b8bbfcf-8jwcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali408a95cd08a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:13.195281 containerd[2018]: 2025-09-04 00:04:13.124 [INFO][5247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.69/32] ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-8jwcz" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" Sep 4 00:04:13.195281 containerd[2018]: 2025-09-04 00:04:13.124 [INFO][5247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali408a95cd08a ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-8jwcz" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" Sep 4 00:04:13.195281 containerd[2018]: 2025-09-04 00:04:13.133 [INFO][5247] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-8jwcz" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" Sep 4 00:04:13.195281 containerd[2018]: 2025-09-04 00:04:13.143 [INFO][5247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-8jwcz" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cd13f3f-b7b5-422b-a7be-6e9466ed1027", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2", Pod:"coredns-674b8bbfcf-8jwcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali408a95cd08a", MAC:"ae:60:3f:d2:6c:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:13.195281 containerd[2018]: 2025-09-04 00:04:13.171 [INFO][5247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-8jwcz" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--8jwcz-eth0" Sep 4 00:04:13.202193 systemd-networkd[1850]: vxlan.calico: Gained IPv6LL Sep 4 00:04:13.303332 containerd[2018]: time="2025-09-04T00:04:13.302121407Z" level=info msg="connecting to shim 38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2" address="unix:///run/containerd/s/e4b4458f476ab6973a9a6e798fe4bf6e14188f034320d71e1c1349b5c29c7102" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:13.318013 systemd-networkd[1850]: calicec6d0bb53a: Link UP Sep 4 00:04:13.326658 systemd-networkd[1850]: calicec6d0bb53a: Gained carrier Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:12.895 [INFO][5271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0 csi-node-driver- calico-system 4c03246a-81f1-4d02-b2e6-f80b3ba3c00c 743 0 2025-09-04 00:03:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 
ip-172-31-20-83 csi-node-driver-vlj2d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicec6d0bb53a [] [] }} ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Namespace="calico-system" Pod="csi-node-driver-vlj2d" WorkloadEndpoint="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:12.895 [INFO][5271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Namespace="calico-system" Pod="csi-node-driver-vlj2d" WorkloadEndpoint="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.083 [INFO][5296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" HandleID="k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Workload="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.084 [INFO][5296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" HandleID="k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Workload="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-83", "pod":"csi-node-driver-vlj2d", "timestamp":"2025-09-04 00:04:13.08363167 +0000 UTC"}, Hostname:"ip-172-31-20-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.084 [INFO][5296] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.117 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.118 [INFO][5296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-83' Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.161 [INFO][5296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" host="ip-172-31-20-83" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.191 [INFO][5296] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-83" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.214 [INFO][5296] ipam/ipam.go 511: Trying affinity for 192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.227 [INFO][5296] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.240 [INFO][5296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.240 [INFO][5296] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" host="ip-172-31-20-83" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.248 [INFO][5296] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093 Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.264 [INFO][5296] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" host="ip-172-31-20-83" Sep 4 
00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.288 [INFO][5296] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.112.70/26] block=192.168.112.64/26 handle="k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" host="ip-172-31-20-83" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.289 [INFO][5296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.70/26] handle="k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" host="ip-172-31-20-83" Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.289 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:13.403674 containerd[2018]: 2025-09-04 00:04:13.289 [INFO][5296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.70/26] IPv6=[] ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" HandleID="k8s-pod-network.bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Workload="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" Sep 4 00:04:13.407673 containerd[2018]: 2025-09-04 00:04:13.304 [INFO][5271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Namespace="calico-system" Pod="csi-node-driver-vlj2d" WorkloadEndpoint="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c03246a-81f1-4d02-b2e6-f80b3ba3c00c", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"", Pod:"csi-node-driver-vlj2d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicec6d0bb53a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:13.407673 containerd[2018]: 2025-09-04 00:04:13.306 [INFO][5271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.70/32] ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Namespace="calico-system" Pod="csi-node-driver-vlj2d" WorkloadEndpoint="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" Sep 4 00:04:13.407673 containerd[2018]: 2025-09-04 00:04:13.307 [INFO][5271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicec6d0bb53a ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Namespace="calico-system" Pod="csi-node-driver-vlj2d" WorkloadEndpoint="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" Sep 4 00:04:13.407673 containerd[2018]: 2025-09-04 00:04:13.335 [INFO][5271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Namespace="calico-system" Pod="csi-node-driver-vlj2d" WorkloadEndpoint="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" Sep 4 00:04:13.407673 containerd[2018]: 
2025-09-04 00:04:13.337 [INFO][5271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Namespace="calico-system" Pod="csi-node-driver-vlj2d" WorkloadEndpoint="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c03246a-81f1-4d02-b2e6-f80b3ba3c00c", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093", Pod:"csi-node-driver-vlj2d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicec6d0bb53a", MAC:"ee:e7:5c:22:7f:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:13.407673 containerd[2018]: 2025-09-04 00:04:13.377 [INFO][5271] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" Namespace="calico-system" Pod="csi-node-driver-vlj2d" WorkloadEndpoint="ip--172--31--20--83-k8s-csi--node--driver--vlj2d-eth0" Sep 4 00:04:13.427285 systemd[1]: Started cri-containerd-38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2.scope - libcontainer container 38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2. Sep 4 00:04:13.534016 systemd-networkd[1850]: cali107333e80b8: Link UP Sep 4 00:04:13.536815 systemd-networkd[1850]: cali107333e80b8: Gained carrier Sep 4 00:04:13.583766 containerd[2018]: time="2025-09-04T00:04:13.582096734Z" level=info msg="connecting to shim bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093" address="unix:///run/containerd/s/980a4f2ab6897f742631c61206734fc971bd62d140465b3ed95b59281251e51f" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:12.885 [INFO][5243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0 coredns-674b8bbfcf- kube-system 542d776f-a26a-48e7-95c9-068cfc34a1e2 855 0 2025-09-04 00:03:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-83 coredns-674b8bbfcf-rrcdc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali107333e80b8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrcdc" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:12.885 [INFO][5243] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrcdc" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.087 [INFO][5300] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" HandleID="k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Workload="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.088 [INFO][5300] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" HandleID="k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Workload="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f270), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-83", "pod":"coredns-674b8bbfcf-rrcdc", "timestamp":"2025-09-04 00:04:13.08767018 +0000 UTC"}, Hostname:"ip-172-31-20-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.088 [INFO][5300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.289 [INFO][5300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.292 [INFO][5300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-83' Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.328 [INFO][5300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.360 [INFO][5300] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.400 [INFO][5300] ipam/ipam.go 511: Trying affinity for 192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.411 [INFO][5300] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.421 [INFO][5300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.421 [INFO][5300] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.436 [INFO][5300] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7 Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.457 [INFO][5300] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.476 [INFO][5300] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.112.71/26] block=192.168.112.64/26 
handle="k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.476 [INFO][5300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.71/26] handle="k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" host="ip-172-31-20-83" Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.478 [INFO][5300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:13.606380 containerd[2018]: 2025-09-04 00:04:13.481 [INFO][5300] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.71/26] IPv6=[] ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" HandleID="k8s-pod-network.c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Workload="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" Sep 4 00:04:13.607944 containerd[2018]: 2025-09-04 00:04:13.512 [INFO][5243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrcdc" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"542d776f-a26a-48e7-95c9-068cfc34a1e2", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"", Pod:"coredns-674b8bbfcf-rrcdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali107333e80b8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:13.607944 containerd[2018]: 2025-09-04 00:04:13.512 [INFO][5243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.71/32] ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrcdc" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" Sep 4 00:04:13.607944 containerd[2018]: 2025-09-04 00:04:13.515 [INFO][5243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali107333e80b8 ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrcdc" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" Sep 4 00:04:13.607944 containerd[2018]: 2025-09-04 00:04:13.540 [INFO][5243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-rrcdc" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" Sep 4 00:04:13.607944 containerd[2018]: 2025-09-04 00:04:13.544 [INFO][5243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrcdc" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"542d776f-a26a-48e7-95c9-068cfc34a1e2", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7", Pod:"coredns-674b8bbfcf-rrcdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali107333e80b8", MAC:"c2:88:a1:f0:80:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:13.607944 containerd[2018]: 2025-09-04 00:04:13.591 [INFO][5243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrcdc" WorkloadEndpoint="ip--172--31--20--83-k8s-coredns--674b8bbfcf--rrcdc-eth0" Sep 4 00:04:13.745428 systemd[1]: Started cri-containerd-bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093.scope - libcontainer container bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093. Sep 4 00:04:13.775861 containerd[2018]: time="2025-09-04T00:04:13.775819531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8jwcz,Uid:8cd13f3f-b7b5-422b-a7be-6e9466ed1027,Namespace:kube-system,Attempt:0,} returns sandbox id \"38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2\"" Sep 4 00:04:13.809253 systemd-networkd[1850]: caliad86d8f54fa: Link UP Sep 4 00:04:13.809968 systemd-networkd[1850]: caliad86d8f54fa: Gained carrier Sep 4 00:04:13.812205 containerd[2018]: time="2025-09-04T00:04:13.812050189Z" level=info msg="connecting to shim c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7" address="unix:///run/containerd/s/bbd90113de92b9b06f1b982a7693a5660314c4e37ad55c4030927d4ebe4ba3a2" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:12.877 [INFO][5259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0 goldmane-54d579b49d- calico-system 
83db72b0-cd35-4665-ad4e-cfc7b78c4403 847 0 2025-09-04 00:03:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-83 goldmane-54d579b49d-6jcfn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliad86d8f54fa [] [] }} ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Namespace="calico-system" Pod="goldmane-54d579b49d-6jcfn" WorkloadEndpoint="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:12.877 [INFO][5259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Namespace="calico-system" Pod="goldmane-54d579b49d-6jcfn" WorkloadEndpoint="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.105 [INFO][5294] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" HandleID="k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Workload="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.106 [INFO][5294] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" HandleID="k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Workload="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123760), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-83", "pod":"goldmane-54d579b49d-6jcfn", "timestamp":"2025-09-04 00:04:13.105918326 +0000 UTC"}, 
Hostname:"ip-172-31-20-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.106 [INFO][5294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.477 [INFO][5294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.477 [INFO][5294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-83' Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.532 [INFO][5294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.577 [INFO][5294] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.603 [INFO][5294] ipam/ipam.go 511: Trying affinity for 192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.612 [INFO][5294] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.632 [INFO][5294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.633 [INFO][5294] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.642 [INFO][5294] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95 Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.683 [INFO][5294] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.727 [INFO][5294] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.112.72/26] block=192.168.112.64/26 handle="k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.727 [INFO][5294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.72/26] handle="k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" host="ip-172-31-20-83" Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.727 [INFO][5294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:04:13.879658 containerd[2018]: 2025-09-04 00:04:13.744 [INFO][5294] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.72/26] IPv6=[] ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" HandleID="k8s-pod-network.b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Workload="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" Sep 4 00:04:13.881411 containerd[2018]: 2025-09-04 00:04:13.780 [INFO][5259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Namespace="calico-system" Pod="goldmane-54d579b49d-6jcfn" WorkloadEndpoint="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"83db72b0-cd35-4665-ad4e-cfc7b78c4403", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"", Pod:"goldmane-54d579b49d-6jcfn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"caliad86d8f54fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:13.881411 containerd[2018]: 2025-09-04 00:04:13.781 [INFO][5259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.72/32] ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Namespace="calico-system" Pod="goldmane-54d579b49d-6jcfn" WorkloadEndpoint="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" Sep 4 00:04:13.881411 containerd[2018]: 2025-09-04 00:04:13.781 [INFO][5259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad86d8f54fa ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Namespace="calico-system" Pod="goldmane-54d579b49d-6jcfn" WorkloadEndpoint="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" Sep 4 00:04:13.881411 containerd[2018]: 2025-09-04 00:04:13.815 [INFO][5259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Namespace="calico-system" Pod="goldmane-54d579b49d-6jcfn" WorkloadEndpoint="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" Sep 4 00:04:13.881411 containerd[2018]: 2025-09-04 00:04:13.816 [INFO][5259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Namespace="calico-system" Pod="goldmane-54d579b49d-6jcfn" WorkloadEndpoint="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"83db72b0-cd35-4665-ad4e-cfc7b78c4403", ResourceVersion:"847", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 4, 0, 3, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-83", ContainerID:"b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95", Pod:"goldmane-54d579b49d-6jcfn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad86d8f54fa", MAC:"ba:45:da:ba:75:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:13.881411 containerd[2018]: 2025-09-04 00:04:13.843 [INFO][5259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" Namespace="calico-system" Pod="goldmane-54d579b49d-6jcfn" WorkloadEndpoint="ip--172--31--20--83-k8s-goldmane--54d579b49d--6jcfn-eth0" Sep 4 00:04:13.948149 containerd[2018]: time="2025-09-04T00:04:13.947899972Z" level=info msg="CreateContainer within sandbox \"38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:04:13.978429 systemd[1]: Started cri-containerd-c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7.scope - libcontainer container c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7. 
Sep 4 00:04:14.017458 containerd[2018]: time="2025-09-04T00:04:14.015073487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlj2d,Uid:4c03246a-81f1-4d02-b2e6-f80b3ba3c00c,Namespace:calico-system,Attempt:0,} returns sandbox id \"bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093\"" Sep 4 00:04:14.020619 containerd[2018]: time="2025-09-04T00:04:14.020328767Z" level=info msg="connecting to shim b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95" address="unix:///run/containerd/s/f5294e72ccb296e91cc532e75728a551cb6aa9183a0db9c9f07f22b50b94f106" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:14.044360 containerd[2018]: time="2025-09-04T00:04:14.043748577Z" level=info msg="Container ff4491bfb46f544bb1a4618c22c97195cc928031b8faf96331b4dd7c8f6de570: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:14.064182 containerd[2018]: time="2025-09-04T00:04:14.064130719Z" level=info msg="CreateContainer within sandbox \"38213a5ff73f095f457ba046336ccaad48361d11f4b543b3a06bfa11aa26a0e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff4491bfb46f544bb1a4618c22c97195cc928031b8faf96331b4dd7c8f6de570\"" Sep 4 00:04:14.067591 containerd[2018]: time="2025-09-04T00:04:14.067547403Z" level=info msg="StartContainer for \"ff4491bfb46f544bb1a4618c22c97195cc928031b8faf96331b4dd7c8f6de570\"" Sep 4 00:04:14.070456 containerd[2018]: time="2025-09-04T00:04:14.070404053Z" level=info msg="connecting to shim ff4491bfb46f544bb1a4618c22c97195cc928031b8faf96331b4dd7c8f6de570" address="unix:///run/containerd/s/e4b4458f476ab6973a9a6e798fe4bf6e14188f034320d71e1c1349b5c29c7102" protocol=ttrpc version=3 Sep 4 00:04:14.133379 systemd[1]: Started cri-containerd-b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95.scope - libcontainer container b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95. 
Sep 4 00:04:14.136251 systemd[1]: Started cri-containerd-ff4491bfb46f544bb1a4618c22c97195cc928031b8faf96331b4dd7c8f6de570.scope - libcontainer container ff4491bfb46f544bb1a4618c22c97195cc928031b8faf96331b4dd7c8f6de570. Sep 4 00:04:14.179610 containerd[2018]: time="2025-09-04T00:04:14.179537025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrcdc,Uid:542d776f-a26a-48e7-95c9-068cfc34a1e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7\"" Sep 4 00:04:14.193001 containerd[2018]: time="2025-09-04T00:04:14.192942370Z" level=info msg="CreateContainer within sandbox \"c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:04:14.215623 containerd[2018]: time="2025-09-04T00:04:14.215580124Z" level=info msg="Container 17e2aeeb3de3372006403868d0387cf0ccd257af751de13acd875902e867bbeb: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:14.229573 containerd[2018]: time="2025-09-04T00:04:14.229495439Z" level=info msg="CreateContainer within sandbox \"c879c5c15f17286ea9bf1e22f60e67cb3c19fcf408928c2074b2ac71485e80e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17e2aeeb3de3372006403868d0387cf0ccd257af751de13acd875902e867bbeb\"" Sep 4 00:04:14.231579 containerd[2018]: time="2025-09-04T00:04:14.231544386Z" level=info msg="StartContainer for \"17e2aeeb3de3372006403868d0387cf0ccd257af751de13acd875902e867bbeb\"" Sep 4 00:04:14.240551 containerd[2018]: time="2025-09-04T00:04:14.240486449Z" level=info msg="connecting to shim 17e2aeeb3de3372006403868d0387cf0ccd257af751de13acd875902e867bbeb" address="unix:///run/containerd/s/bbd90113de92b9b06f1b982a7693a5660314c4e37ad55c4030927d4ebe4ba3a2" protocol=ttrpc version=3 Sep 4 00:04:14.269635 containerd[2018]: time="2025-09-04T00:04:14.269361273Z" level=info msg="StartContainer for 
\"ff4491bfb46f544bb1a4618c22c97195cc928031b8faf96331b4dd7c8f6de570\" returns successfully" Sep 4 00:04:14.273134 systemd[1]: Started cri-containerd-17e2aeeb3de3372006403868d0387cf0ccd257af751de13acd875902e867bbeb.scope - libcontainer container 17e2aeeb3de3372006403868d0387cf0ccd257af751de13acd875902e867bbeb. Sep 4 00:04:14.340255 containerd[2018]: time="2025-09-04T00:04:14.340216921Z" level=info msg="StartContainer for \"17e2aeeb3de3372006403868d0387cf0ccd257af751de13acd875902e867bbeb\" returns successfully" Sep 4 00:04:14.361835 containerd[2018]: time="2025-09-04T00:04:14.360599981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6jcfn,Uid:83db72b0-cd35-4665-ad4e-cfc7b78c4403,Namespace:calico-system,Attempt:0,} returns sandbox id \"b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95\"" Sep 4 00:04:14.545224 systemd-networkd[1850]: calicec6d0bb53a: Gained IPv6LL Sep 4 00:04:14.662495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650880437.mount: Deactivated successfully. 
Sep 4 00:04:14.929413 systemd-networkd[1850]: cali107333e80b8: Gained IPv6LL Sep 4 00:04:14.993582 systemd-networkd[1850]: caliad86d8f54fa: Gained IPv6LL Sep 4 00:04:14.994578 systemd-networkd[1850]: cali408a95cd08a: Gained IPv6LL Sep 4 00:04:15.191127 kubelet[3603]: I0904 00:04:15.190946 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8jwcz" podStartSLOduration=46.190170147 podStartE2EDuration="46.190170147s" podCreationTimestamp="2025-09-04 00:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:15.187549076 +0000 UTC m=+51.769782074" watchObservedRunningTime="2025-09-04 00:04:15.190170147 +0000 UTC m=+51.772403144" Sep 4 00:04:15.193805 kubelet[3603]: I0904 00:04:15.193475 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rrcdc" podStartSLOduration=46.19345639 podStartE2EDuration="46.19345639s" podCreationTimestamp="2025-09-04 00:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:15.119273139 +0000 UTC m=+51.701506135" watchObservedRunningTime="2025-09-04 00:04:15.19345639 +0000 UTC m=+51.775689387" Sep 4 00:04:15.462916 containerd[2018]: time="2025-09-04T00:04:15.462702304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:15.464642 containerd[2018]: time="2025-09-04T00:04:15.464551436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 00:04:15.466360 containerd[2018]: time="2025-09-04T00:04:15.466141486Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:15.479148 containerd[2018]: time="2025-09-04T00:04:15.479075692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:15.480313 containerd[2018]: time="2025-09-04T00:04:15.480160203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.898174535s" Sep 4 00:04:15.480313 containerd[2018]: time="2025-09-04T00:04:15.480211455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 00:04:15.481757 containerd[2018]: time="2025-09-04T00:04:15.481688070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:04:15.553423 containerd[2018]: time="2025-09-04T00:04:15.553379289Z" level=info msg="CreateContainer within sandbox \"7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 00:04:15.565702 containerd[2018]: time="2025-09-04T00:04:15.564999788Z" level=info msg="Container ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:15.594030 containerd[2018]: time="2025-09-04T00:04:15.593990740Z" level=info msg="CreateContainer within sandbox \"7a5d4665d82f985adadb4d139c815c69f1664266c73f214330c0d8a6988a7ace\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\"" Sep 4 00:04:15.594968 containerd[2018]: time="2025-09-04T00:04:15.594930988Z" level=info msg="StartContainer for \"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\"" Sep 4 00:04:15.597639 containerd[2018]: time="2025-09-04T00:04:15.596732186Z" level=info msg="connecting to shim ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567" address="unix:///run/containerd/s/159f1afd87fb760edaca4aae191874402d9ebebb28ec4319c0c7d8254079b246" protocol=ttrpc version=3 Sep 4 00:04:15.634985 systemd[1]: Started cri-containerd-ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567.scope - libcontainer container ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567. Sep 4 00:04:15.782863 containerd[2018]: time="2025-09-04T00:04:15.782820070Z" level=info msg="StartContainer for \"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\" returns successfully" Sep 4 00:04:17.238813 containerd[2018]: time="2025-09-04T00:04:17.238740840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\" id:\"245c6c4fb4270c0fe7bfa47146033eebcfd0ce3926b5aa10f27e2d553d577e44\" pid:5667 exited_at:{seconds:1756944257 nanos:233222001}" Sep 4 00:04:17.279839 kubelet[3603]: I0904 00:04:17.279760 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fd96bd78c-2tmlb" podStartSLOduration=27.361943152 podStartE2EDuration="32.279734759s" podCreationTimestamp="2025-09-04 00:03:45 +0000 UTC" firstStartedPulling="2025-09-04 00:04:10.563539573 +0000 UTC m=+47.145772564" lastFinishedPulling="2025-09-04 00:04:15.481331198 +0000 UTC m=+52.063564171" observedRunningTime="2025-09-04 00:04:16.1926124 +0000 UTC m=+52.774845396" watchObservedRunningTime="2025-09-04 00:04:17.279734759 +0000 UTC m=+53.861967756" Sep 4 00:04:17.632050 ntpd[1970]: Listen 
normally on 8 vxlan.calico 192.168.112.64:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 8 vxlan.calico 192.168.112.64:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 9 calice1584e9d57 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 10 cali5f7c713629e [fe80::ecee:eeff:feee:eeee%5]:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 11 cali891f8e3b9a6 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 12 cali2977c5249c9 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 13 vxlan.calico [fe80::645b:67ff:fe57:c1b2%8]:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 14 cali408a95cd08a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 15 calicec6d0bb53a [fe80::ecee:eeff:feee:eeee%12]:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 16 cali107333e80b8 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 4 00:04:17.634074 ntpd[1970]: 4 Sep 00:04:17 ntpd[1970]: Listen normally on 17 caliad86d8f54fa [fe80::ecee:eeff:feee:eeee%14]:123 Sep 4 00:04:17.632140 ntpd[1970]: Listen normally on 9 calice1584e9d57 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 4 00:04:17.632196 ntpd[1970]: Listen normally on 10 cali5f7c713629e [fe80::ecee:eeff:feee:eeee%5]:123 Sep 4 00:04:17.632233 ntpd[1970]: Listen normally on 11 cali891f8e3b9a6 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 4 00:04:17.632270 ntpd[1970]: Listen normally on 12 cali2977c5249c9 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 00:04:17.632309 ntpd[1970]: Listen normally on 13 vxlan.calico [fe80::645b:67ff:fe57:c1b2%8]:123 Sep 4 00:04:17.632350 ntpd[1970]: Listen normally on 14 cali408a95cd08a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 
00:04:17.632386 ntpd[1970]: Listen normally on 15 calicec6d0bb53a [fe80::ecee:eeff:feee:eeee%12]:123 Sep 4 00:04:17.632425 ntpd[1970]: Listen normally on 16 cali107333e80b8 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 4 00:04:17.632461 ntpd[1970]: Listen normally on 17 caliad86d8f54fa [fe80::ecee:eeff:feee:eeee%14]:123 Sep 4 00:04:18.630248 containerd[2018]: time="2025-09-04T00:04:18.630208965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:18.633796 containerd[2018]: time="2025-09-04T00:04:18.633746987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 4 00:04:18.635504 containerd[2018]: time="2025-09-04T00:04:18.634655931Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:18.637896 containerd[2018]: time="2025-09-04T00:04:18.637354982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:18.638153 containerd[2018]: time="2025-09-04T00:04:18.638122575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.156397659s" Sep 4 00:04:18.638231 containerd[2018]: time="2025-09-04T00:04:18.638219681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 00:04:18.640067 
containerd[2018]: time="2025-09-04T00:04:18.640030213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:04:18.644683 containerd[2018]: time="2025-09-04T00:04:18.644506079Z" level=info msg="CreateContainer within sandbox \"a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:04:18.690075 containerd[2018]: time="2025-09-04T00:04:18.690036178Z" level=info msg="Container fb99f4747629fffb5b150e8d2a26642d66ff8e514590e3b6de01543ca3d80894: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:18.702628 containerd[2018]: time="2025-09-04T00:04:18.702572278Z" level=info msg="CreateContainer within sandbox \"a6ba30e476ce78788dd87e861f99abb244b9b970b4a38dca1e04bc54e6b02a10\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb99f4747629fffb5b150e8d2a26642d66ff8e514590e3b6de01543ca3d80894\"" Sep 4 00:04:18.703692 containerd[2018]: time="2025-09-04T00:04:18.703642453Z" level=info msg="StartContainer for \"fb99f4747629fffb5b150e8d2a26642d66ff8e514590e3b6de01543ca3d80894\"" Sep 4 00:04:18.706447 containerd[2018]: time="2025-09-04T00:04:18.706402920Z" level=info msg="connecting to shim fb99f4747629fffb5b150e8d2a26642d66ff8e514590e3b6de01543ca3d80894" address="unix:///run/containerd/s/3f9f3728c9298aa9421b3ca9534fcba4d3667b20a7f0705f153e81e657435003" protocol=ttrpc version=3 Sep 4 00:04:18.735110 systemd[1]: Started cri-containerd-fb99f4747629fffb5b150e8d2a26642d66ff8e514590e3b6de01543ca3d80894.scope - libcontainer container fb99f4747629fffb5b150e8d2a26642d66ff8e514590e3b6de01543ca3d80894. 
Sep 4 00:04:18.811748 containerd[2018]: time="2025-09-04T00:04:18.811682542Z" level=info msg="StartContainer for \"fb99f4747629fffb5b150e8d2a26642d66ff8e514590e3b6de01543ca3d80894\" returns successfully" Sep 4 00:04:19.109920 containerd[2018]: time="2025-09-04T00:04:19.109861815Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:19.112540 containerd[2018]: time="2025-09-04T00:04:19.112503713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 00:04:19.116230 containerd[2018]: time="2025-09-04T00:04:19.116192597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 475.247773ms" Sep 4 00:04:19.117826 containerd[2018]: time="2025-09-04T00:04:19.117796944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 00:04:19.120286 containerd[2018]: time="2025-09-04T00:04:19.120261032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 00:04:19.128223 containerd[2018]: time="2025-09-04T00:04:19.128052669Z" level=info msg="CreateContainer within sandbox \"a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:04:19.156010 containerd[2018]: time="2025-09-04T00:04:19.150734140Z" level=info msg="Container 42b9ebbcfa4e5f765474530a0c0245b6081db2afaf5c3961ea40275a6c87a374: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:19.194516 containerd[2018]: 
time="2025-09-04T00:04:19.194449707Z" level=info msg="CreateContainer within sandbox \"a25fd9f31ceab8211af6a5533bd35501374831d03a4c31daab9bab4235b0987f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"42b9ebbcfa4e5f765474530a0c0245b6081db2afaf5c3961ea40275a6c87a374\"" Sep 4 00:04:19.196499 containerd[2018]: time="2025-09-04T00:04:19.196354378Z" level=info msg="StartContainer for \"42b9ebbcfa4e5f765474530a0c0245b6081db2afaf5c3961ea40275a6c87a374\"" Sep 4 00:04:19.199452 containerd[2018]: time="2025-09-04T00:04:19.199398650Z" level=info msg="connecting to shim 42b9ebbcfa4e5f765474530a0c0245b6081db2afaf5c3961ea40275a6c87a374" address="unix:///run/containerd/s/208de93a76e36608e878540a6f82c806c68814e3f2c77b2c8dc5d019b5377b49" protocol=ttrpc version=3 Sep 4 00:04:19.246156 kubelet[3603]: I0904 00:04:19.246086 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69df694c46-rpcnm" podStartSLOduration=33.217475433 podStartE2EDuration="41.246045738s" podCreationTimestamp="2025-09-04 00:03:38 +0000 UTC" firstStartedPulling="2025-09-04 00:04:10.611081167 +0000 UTC m=+47.193314141" lastFinishedPulling="2025-09-04 00:04:18.639651472 +0000 UTC m=+55.221884446" observedRunningTime="2025-09-04 00:04:19.245900875 +0000 UTC m=+55.828133894" watchObservedRunningTime="2025-09-04 00:04:19.246045738 +0000 UTC m=+55.828278735" Sep 4 00:04:19.293559 systemd[1]: Started cri-containerd-42b9ebbcfa4e5f765474530a0c0245b6081db2afaf5c3961ea40275a6c87a374.scope - libcontainer container 42b9ebbcfa4e5f765474530a0c0245b6081db2afaf5c3961ea40275a6c87a374. 
Sep 4 00:04:19.413355 containerd[2018]: time="2025-09-04T00:04:19.412432375Z" level=info msg="StartContainer for \"42b9ebbcfa4e5f765474530a0c0245b6081db2afaf5c3961ea40275a6c87a374\" returns successfully" Sep 4 00:04:20.226562 kubelet[3603]: I0904 00:04:20.226503 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:04:20.243368 kubelet[3603]: I0904 00:04:20.243306 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69df694c46-r4snj" podStartSLOduration=33.827181648 podStartE2EDuration="42.243282092s" podCreationTimestamp="2025-09-04 00:03:38 +0000 UTC" firstStartedPulling="2025-09-04 00:04:10.703934994 +0000 UTC m=+47.286167981" lastFinishedPulling="2025-09-04 00:04:19.120035438 +0000 UTC m=+55.702268425" observedRunningTime="2025-09-04 00:04:20.241084886 +0000 UTC m=+56.823317882" watchObservedRunningTime="2025-09-04 00:04:20.243282092 +0000 UTC m=+56.825515092" Sep 4 00:04:20.812168 containerd[2018]: time="2025-09-04T00:04:20.812116317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:20.815995 containerd[2018]: time="2025-09-04T00:04:20.815537568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 00:04:20.818902 containerd[2018]: time="2025-09-04T00:04:20.818553292Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:20.825537 containerd[2018]: time="2025-09-04T00:04:20.824949484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.7043215s" Sep 4 00:04:20.825537 containerd[2018]: time="2025-09-04T00:04:20.825001291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 00:04:20.827708 containerd[2018]: time="2025-09-04T00:04:20.825958346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:20.834490 containerd[2018]: time="2025-09-04T00:04:20.834448739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 4 00:04:20.844908 containerd[2018]: time="2025-09-04T00:04:20.841746486Z" level=info msg="CreateContainer within sandbox \"78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 00:04:20.860931 containerd[2018]: time="2025-09-04T00:04:20.857118116Z" level=info msg="Container b80f077522374a57bdd2732673e1e6a54743d63875166e1d7008913c6b9498d9: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:20.887519 containerd[2018]: time="2025-09-04T00:04:20.887475804Z" level=info msg="CreateContainer within sandbox \"78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b80f077522374a57bdd2732673e1e6a54743d63875166e1d7008913c6b9498d9\"" Sep 4 00:04:20.889177 containerd[2018]: time="2025-09-04T00:04:20.888710648Z" level=info msg="StartContainer for \"b80f077522374a57bdd2732673e1e6a54743d63875166e1d7008913c6b9498d9\"" Sep 4 00:04:20.893870 containerd[2018]: time="2025-09-04T00:04:20.890420967Z" level=info msg="connecting to shim b80f077522374a57bdd2732673e1e6a54743d63875166e1d7008913c6b9498d9" 
address="unix:///run/containerd/s/1ee90fafd14f88bd4be3dcb869fc0562062b829ec6365331aa97713e10b3edfe" protocol=ttrpc version=3 Sep 4 00:04:20.997136 systemd[1]: Started cri-containerd-b80f077522374a57bdd2732673e1e6a54743d63875166e1d7008913c6b9498d9.scope - libcontainer container b80f077522374a57bdd2732673e1e6a54743d63875166e1d7008913c6b9498d9. Sep 4 00:04:21.100657 containerd[2018]: time="2025-09-04T00:04:21.100491224Z" level=info msg="StartContainer for \"b80f077522374a57bdd2732673e1e6a54743d63875166e1d7008913c6b9498d9\" returns successfully" Sep 4 00:04:21.230451 kubelet[3603]: I0904 00:04:21.230183 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:04:22.981916 containerd[2018]: time="2025-09-04T00:04:22.981755534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:22.983682 containerd[2018]: time="2025-09-04T00:04:22.983638755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 00:04:22.986369 containerd[2018]: time="2025-09-04T00:04:22.986324127Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:22.990259 containerd[2018]: time="2025-09-04T00:04:22.989456780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:22.990259 containerd[2018]: time="2025-09-04T00:04:22.990138257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.155641334s" Sep 4 00:04:22.990259 containerd[2018]: time="2025-09-04T00:04:22.990168838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 00:04:22.996801 containerd[2018]: time="2025-09-04T00:04:22.996756666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 00:04:23.004397 containerd[2018]: time="2025-09-04T00:04:23.004353625Z" level=info msg="CreateContainer within sandbox \"bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 00:04:23.074911 containerd[2018]: time="2025-09-04T00:04:23.074794955Z" level=info msg="Container 13198cfed0586cb24a850d8be1e66ebb0ee73f344f4c7805dd87b1b882aa9c39: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:23.105229 containerd[2018]: time="2025-09-04T00:04:23.105171603Z" level=info msg="CreateContainer within sandbox \"bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"13198cfed0586cb24a850d8be1e66ebb0ee73f344f4c7805dd87b1b882aa9c39\"" Sep 4 00:04:23.106399 containerd[2018]: time="2025-09-04T00:04:23.106360669Z" level=info msg="StartContainer for \"13198cfed0586cb24a850d8be1e66ebb0ee73f344f4c7805dd87b1b882aa9c39\"" Sep 4 00:04:23.108356 containerd[2018]: time="2025-09-04T00:04:23.108178966Z" level=info msg="connecting to shim 13198cfed0586cb24a850d8be1e66ebb0ee73f344f4c7805dd87b1b882aa9c39" address="unix:///run/containerd/s/980a4f2ab6897f742631c61206734fc971bd62d140465b3ed95b59281251e51f" protocol=ttrpc version=3 Sep 4 00:04:23.144157 systemd[1]: Started cri-containerd-13198cfed0586cb24a850d8be1e66ebb0ee73f344f4c7805dd87b1b882aa9c39.scope - libcontainer container 
13198cfed0586cb24a850d8be1e66ebb0ee73f344f4c7805dd87b1b882aa9c39. Sep 4 00:04:23.202774 containerd[2018]: time="2025-09-04T00:04:23.202722251Z" level=info msg="StartContainer for \"13198cfed0586cb24a850d8be1e66ebb0ee73f344f4c7805dd87b1b882aa9c39\" returns successfully" Sep 4 00:04:24.961104 systemd[1]: Started sshd@9-172.31.20.83:22-139.178.68.195:40014.service - OpenSSH per-connection server daemon (139.178.68.195:40014). Sep 4 00:04:25.362214 sshd[5865]: Accepted publickey for core from 139.178.68.195 port 40014 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:04:25.371919 sshd-session[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:25.387964 systemd-logind[1981]: New session 10 of user core. Sep 4 00:04:25.394092 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 00:04:26.608738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555086443.mount: Deactivated successfully. Sep 4 00:04:26.699507 sshd[5872]: Connection closed by 139.178.68.195 port 40014 Sep 4 00:04:26.700385 sshd-session[5865]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:26.712507 systemd[1]: sshd@9-172.31.20.83:22-139.178.68.195:40014.service: Deactivated successfully. Sep 4 00:04:26.716969 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 00:04:26.718597 systemd-logind[1981]: Session 10 logged out. Waiting for processes to exit. Sep 4 00:04:26.722361 systemd-logind[1981]: Removed session 10. 
Sep 4 00:04:28.088142 containerd[2018]: time="2025-09-04T00:04:28.088045927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:28.143018 containerd[2018]: time="2025-09-04T00:04:28.091903128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 4 00:04:28.217706 containerd[2018]: time="2025-09-04T00:04:28.217602983Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:28.234710 containerd[2018]: time="2025-09-04T00:04:28.234660546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:28.269646 containerd[2018]: time="2025-09-04T00:04:28.269420975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.272614517s" Sep 4 00:04:28.269646 containerd[2018]: time="2025-09-04T00:04:28.269642521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 4 00:04:28.289294 containerd[2018]: time="2025-09-04T00:04:28.289248544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 00:04:28.355969 containerd[2018]: time="2025-09-04T00:04:28.355789352Z" level=info msg="CreateContainer within sandbox 
\"b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 4 00:04:28.396189 containerd[2018]: time="2025-09-04T00:04:28.396038966Z" level=info msg="Container e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:28.432409 containerd[2018]: time="2025-09-04T00:04:28.432162975Z" level=info msg="CreateContainer within sandbox \"b692aa616d1b2486107955bbd8134038e2a7f798c20b1ff412d983e2660a8d95\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead\"" Sep 4 00:04:28.433155 containerd[2018]: time="2025-09-04T00:04:28.433123950Z" level=info msg="StartContainer for \"e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead\"" Sep 4 00:04:28.435008 containerd[2018]: time="2025-09-04T00:04:28.434977611Z" level=info msg="connecting to shim e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead" address="unix:///run/containerd/s/f5294e72ccb296e91cc532e75728a551cb6aa9183a0db9c9f07f22b50b94f106" protocol=ttrpc version=3 Sep 4 00:04:28.651426 systemd[1]: Started cri-containerd-e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead.scope - libcontainer container e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead. 
Sep 4 00:04:28.766559 containerd[2018]: time="2025-09-04T00:04:28.766430823Z" level=info msg="StartContainer for \"e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead\" returns successfully" Sep 4 00:04:29.497776 kubelet[3603]: I0904 00:04:29.490731 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-6jcfn" podStartSLOduration=31.554510431 podStartE2EDuration="45.478352467s" podCreationTimestamp="2025-09-04 00:03:44 +0000 UTC" firstStartedPulling="2025-09-04 00:04:14.364581996 +0000 UTC m=+50.946814973" lastFinishedPulling="2025-09-04 00:04:28.288424035 +0000 UTC m=+64.870657009" observedRunningTime="2025-09-04 00:04:29.476216935 +0000 UTC m=+66.058449937" watchObservedRunningTime="2025-09-04 00:04:29.478352467 +0000 UTC m=+66.060585464" Sep 4 00:04:30.057936 containerd[2018]: time="2025-09-04T00:04:30.057819308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead\" id:\"fe2217e6d3cac157d41d435e89f81e0c2c91a001d194784ddf4605ae2d4bba68\" pid:5945 exit_status:1 exited_at:{seconds:1756944270 nanos:33723625}" Sep 4 00:04:30.741505 containerd[2018]: time="2025-09-04T00:04:30.741467367Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead\" id:\"b45e1a01ebbf40488f68a576a5fe87225602c24cd67e94d47722246bcdd3fbab\" pid:5973 exited_at:{seconds:1756944270 nanos:738003229}" Sep 4 00:04:31.028309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474193205.mount: Deactivated successfully. 
Sep 4 00:04:31.077179 containerd[2018]: time="2025-09-04T00:04:31.077115987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:31.078621 containerd[2018]: time="2025-09-04T00:04:31.078576201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 4 00:04:31.080080 containerd[2018]: time="2025-09-04T00:04:31.080012117Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:31.082933 containerd[2018]: time="2025-09-04T00:04:31.082830028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:31.083564 containerd[2018]: time="2025-09-04T00:04:31.083526294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.794185522s" Sep 4 00:04:31.084408 containerd[2018]: time="2025-09-04T00:04:31.083673582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 4 00:04:31.101394 containerd[2018]: time="2025-09-04T00:04:31.101334953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 00:04:31.106666 containerd[2018]: time="2025-09-04T00:04:31.106625836Z" level=info msg="CreateContainer within sandbox 
\"78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 00:04:31.119539 containerd[2018]: time="2025-09-04T00:04:31.118739355Z" level=info msg="Container 5e45d944cb211689c5b3d6d98b21987f144c82a0f0e03bbb30b1cfccbcecbd65: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:31.156251 containerd[2018]: time="2025-09-04T00:04:31.156202142Z" level=info msg="CreateContainer within sandbox \"78e154847a355e96d984eb85cb870083414f30e04ab1d9cc81036b015f758302\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5e45d944cb211689c5b3d6d98b21987f144c82a0f0e03bbb30b1cfccbcecbd65\"" Sep 4 00:04:31.157090 containerd[2018]: time="2025-09-04T00:04:31.157056317Z" level=info msg="StartContainer for \"5e45d944cb211689c5b3d6d98b21987f144c82a0f0e03bbb30b1cfccbcecbd65\"" Sep 4 00:04:31.158532 containerd[2018]: time="2025-09-04T00:04:31.158456796Z" level=info msg="connecting to shim 5e45d944cb211689c5b3d6d98b21987f144c82a0f0e03bbb30b1cfccbcecbd65" address="unix:///run/containerd/s/1ee90fafd14f88bd4be3dcb869fc0562062b829ec6365331aa97713e10b3edfe" protocol=ttrpc version=3 Sep 4 00:04:31.198341 systemd[1]: Started cri-containerd-5e45d944cb211689c5b3d6d98b21987f144c82a0f0e03bbb30b1cfccbcecbd65.scope - libcontainer container 5e45d944cb211689c5b3d6d98b21987f144c82a0f0e03bbb30b1cfccbcecbd65. Sep 4 00:04:31.328860 containerd[2018]: time="2025-09-04T00:04:31.328725255Z" level=info msg="StartContainer for \"5e45d944cb211689c5b3d6d98b21987f144c82a0f0e03bbb30b1cfccbcecbd65\" returns successfully" Sep 4 00:04:31.744956 systemd[1]: Started sshd@10-172.31.20.83:22-139.178.68.195:40150.service - OpenSSH per-connection server daemon (139.178.68.195:40150). 
Sep 4 00:04:32.044372 sshd[6032]: Accepted publickey for core from 139.178.68.195 port 40150 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:04:32.049141 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:32.056777 systemd-logind[1981]: New session 11 of user core. Sep 4 00:04:32.061128 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 00:04:33.223439 containerd[2018]: time="2025-09-04T00:04:33.223370406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:33.224300 containerd[2018]: time="2025-09-04T00:04:33.224238927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 4 00:04:33.226657 containerd[2018]: time="2025-09-04T00:04:33.226497143Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:33.230021 containerd[2018]: time="2025-09-04T00:04:33.229952311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:33.230791 containerd[2018]: time="2025-09-04T00:04:33.230615380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.129233112s" Sep 4 00:04:33.230791 containerd[2018]: time="2025-09-04T00:04:33.230655039Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 4 00:04:33.452943 containerd[2018]: time="2025-09-04T00:04:33.452681445Z" level=info msg="CreateContainer within sandbox \"bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 00:04:33.489388 containerd[2018]: time="2025-09-04T00:04:33.489284927Z" level=info msg="Container 98e6812b052b8f848195551346a1288a1ac421c2dc9834ec585877faa3ff02cf: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:33.522222 containerd[2018]: time="2025-09-04T00:04:33.522178813Z" level=info msg="CreateContainer within sandbox \"bde7912f9f0f88581b0075be76f9b2dc9849a8c9a94e014e45be4494e25fd093\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"98e6812b052b8f848195551346a1288a1ac421c2dc9834ec585877faa3ff02cf\"" Sep 4 00:04:33.546491 containerd[2018]: time="2025-09-04T00:04:33.546427111Z" level=info msg="StartContainer for \"98e6812b052b8f848195551346a1288a1ac421c2dc9834ec585877faa3ff02cf\"" Sep 4 00:04:33.555647 containerd[2018]: time="2025-09-04T00:04:33.549261576Z" level=info msg="connecting to shim 98e6812b052b8f848195551346a1288a1ac421c2dc9834ec585877faa3ff02cf" address="unix:///run/containerd/s/980a4f2ab6897f742631c61206734fc971bd62d140465b3ed95b59281251e51f" protocol=ttrpc version=3 Sep 4 00:04:33.615217 systemd[1]: Started cri-containerd-98e6812b052b8f848195551346a1288a1ac421c2dc9834ec585877faa3ff02cf.scope - libcontainer container 98e6812b052b8f848195551346a1288a1ac421c2dc9834ec585877faa3ff02cf. 
Sep 4 00:04:33.698138 containerd[2018]: time="2025-09-04T00:04:33.698074054Z" level=info msg="StartContainer for \"98e6812b052b8f848195551346a1288a1ac421c2dc9834ec585877faa3ff02cf\" returns successfully" Sep 4 00:04:33.987972 sshd[6034]: Connection closed by 139.178.68.195 port 40150 Sep 4 00:04:33.991190 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:34.001489 systemd[1]: sshd@10-172.31.20.83:22-139.178.68.195:40150.service: Deactivated successfully. Sep 4 00:04:34.002278 systemd-logind[1981]: Session 11 logged out. Waiting for processes to exit. Sep 4 00:04:34.007228 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 00:04:34.012369 systemd-logind[1981]: Removed session 11. Sep 4 00:04:34.335946 containerd[2018]: time="2025-09-04T00:04:34.335831164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\" id:\"7a2c425e297b9f7cc1d391d5c6d4527e59bd89325f8955996a46d107cd9a25ff\" pid:6104 exited_at:{seconds:1756944274 nanos:335479889}" Sep 4 00:04:34.385295 kubelet[3603]: I0904 00:04:34.379043 3603 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 00:04:34.395332 kubelet[3603]: I0904 00:04:34.395278 3603 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 00:04:34.716917 kubelet[3603]: I0904 00:04:34.667889 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bbf6d5bdd-m9zrs" podStartSLOduration=5.424853378 podStartE2EDuration="25.667852566s" podCreationTimestamp="2025-09-04 00:04:09 +0000 UTC" firstStartedPulling="2025-09-04 00:04:10.85807262 +0000 UTC m=+47.440305601" lastFinishedPulling="2025-09-04 00:04:31.101071813 +0000 UTC m=+67.683304789" 
observedRunningTime="2025-09-04 00:04:31.561203753 +0000 UTC m=+68.143436775" watchObservedRunningTime="2025-09-04 00:04:34.667852566 +0000 UTC m=+71.250085563" Sep 4 00:04:34.716917 kubelet[3603]: I0904 00:04:34.716017 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vlj2d" podStartSLOduration=30.522895353 podStartE2EDuration="49.715999533s" podCreationTimestamp="2025-09-04 00:03:45 +0000 UTC" firstStartedPulling="2025-09-04 00:04:14.039073681 +0000 UTC m=+50.621306656" lastFinishedPulling="2025-09-04 00:04:33.232177862 +0000 UTC m=+69.814410836" observedRunningTime="2025-09-04 00:04:34.633691518 +0000 UTC m=+71.215924532" watchObservedRunningTime="2025-09-04 00:04:34.715999533 +0000 UTC m=+71.298232529" Sep 4 00:04:39.024777 systemd[1]: Started sshd@11-172.31.20.83:22-139.178.68.195:40158.service - OpenSSH per-connection server daemon (139.178.68.195:40158). Sep 4 00:04:39.318781 sshd[6115]: Accepted publickey for core from 139.178.68.195 port 40158 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:04:39.323596 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:39.332070 systemd-logind[1981]: New session 12 of user core. Sep 4 00:04:39.337476 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 00:04:40.465763 sshd[6117]: Connection closed by 139.178.68.195 port 40158 Sep 4 00:04:40.466349 sshd-session[6115]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:40.472516 systemd-logind[1981]: Session 12 logged out. Waiting for processes to exit. Sep 4 00:04:40.474008 systemd[1]: sshd@11-172.31.20.83:22-139.178.68.195:40158.service: Deactivated successfully. Sep 4 00:04:40.476726 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 00:04:40.478470 systemd-logind[1981]: Removed session 12. 
Sep 4 00:04:40.503236 systemd[1]: Started sshd@12-172.31.20.83:22-139.178.68.195:47534.service - OpenSSH per-connection server daemon (139.178.68.195:47534). Sep 4 00:04:40.650348 containerd[2018]: time="2025-09-04T00:04:40.650289975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\" id:\"b7df7f9a940586917685465cb9b613b9eb79c175d9fd205114529f2562a427c9\" pid:6138 exited_at:{seconds:1756944280 nanos:649713914}" Sep 4 00:04:40.696296 sshd[6153]: Accepted publickey for core from 139.178.68.195 port 47534 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:04:40.700407 sshd-session[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:40.716801 systemd-logind[1981]: New session 13 of user core. Sep 4 00:04:40.724498 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 00:04:40.859346 kubelet[3603]: I0904 00:04:40.859299 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:04:41.252242 sshd[6155]: Connection closed by 139.178.68.195 port 47534 Sep 4 00:04:41.255067 sshd-session[6153]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:41.262247 systemd-logind[1981]: Session 13 logged out. Waiting for processes to exit. Sep 4 00:04:41.262798 systemd[1]: sshd@12-172.31.20.83:22-139.178.68.195:47534.service: Deactivated successfully. Sep 4 00:04:41.268064 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 00:04:41.287660 systemd-logind[1981]: Removed session 13. Sep 4 00:04:41.290488 systemd[1]: Started sshd@13-172.31.20.83:22-139.178.68.195:47542.service - OpenSSH per-connection server daemon (139.178.68.195:47542). 
Sep 4 00:04:41.521366 sshd[6165]: Accepted publickey for core from 139.178.68.195 port 47542 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:04:41.524622 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:04:41.535582 systemd-logind[1981]: New session 14 of user core.
Sep 4 00:04:41.544363 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 00:04:42.007831 sshd[6168]: Connection closed by 139.178.68.195 port 47542
Sep 4 00:04:42.008563 sshd-session[6165]: pam_unix(sshd:session): session closed for user core
Sep 4 00:04:42.014836 systemd-logind[1981]: Session 14 logged out. Waiting for processes to exit.
Sep 4 00:04:42.016864 systemd[1]: sshd@13-172.31.20.83:22-139.178.68.195:47542.service: Deactivated successfully.
Sep 4 00:04:42.022568 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 00:04:42.027865 systemd-logind[1981]: Removed session 14.
Sep 4 00:04:43.715987 kubelet[3603]: I0904 00:04:43.715911 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 00:04:47.045264 systemd[1]: Started sshd@14-172.31.20.83:22-139.178.68.195:47554.service - OpenSSH per-connection server daemon (139.178.68.195:47554).
Sep 4 00:04:47.317423 containerd[2018]: time="2025-09-04T00:04:47.317279564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\" id:\"9cf99b4c88af81d125d719afa61c5b8be688cdece085311dc0f39d034c8676a7\" pid:6203 exited_at:{seconds:1756944287 nanos:315504489}"
Sep 4 00:04:47.325135 sshd[6188]: Accepted publickey for core from 139.178.68.195 port 47554 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:04:47.328419 sshd-session[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:04:47.338170 systemd-logind[1981]: New session 15 of user core.
Sep 4 00:04:47.345568 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 00:04:48.109709 sshd[6212]: Connection closed by 139.178.68.195 port 47554
Sep 4 00:04:48.110850 sshd-session[6188]: pam_unix(sshd:session): session closed for user core
Sep 4 00:04:48.118613 systemd-logind[1981]: Session 15 logged out. Waiting for processes to exit.
Sep 4 00:04:48.119553 systemd[1]: sshd@14-172.31.20.83:22-139.178.68.195:47554.service: Deactivated successfully.
Sep 4 00:04:48.122708 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 00:04:48.125217 systemd-logind[1981]: Removed session 15.
Sep 4 00:04:48.148422 systemd[1]: Started sshd@15-172.31.20.83:22-139.178.68.195:47566.service - OpenSSH per-connection server daemon (139.178.68.195:47566).
Sep 4 00:04:48.381701 sshd[6224]: Accepted publickey for core from 139.178.68.195 port 47566 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:04:48.383477 sshd-session[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:04:48.389086 systemd-logind[1981]: New session 16 of user core.
Sep 4 00:04:48.394171 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 00:04:52.015353 sshd[6226]: Connection closed by 139.178.68.195 port 47566
Sep 4 00:04:52.028460 sshd-session[6224]: pam_unix(sshd:session): session closed for user core
Sep 4 00:04:52.056785 systemd[1]: sshd@15-172.31.20.83:22-139.178.68.195:47566.service: Deactivated successfully.
Sep 4 00:04:52.060618 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 00:04:52.081039 systemd-logind[1981]: Session 16 logged out. Waiting for processes to exit.
Sep 4 00:04:52.088463 systemd[1]: Started sshd@16-172.31.20.83:22-139.178.68.195:34606.service - OpenSSH per-connection server daemon (139.178.68.195:34606).
Sep 4 00:04:52.101271 systemd-logind[1981]: Removed session 16.
Sep 4 00:04:52.316863 sshd[6240]: Accepted publickey for core from 139.178.68.195 port 34606 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:04:52.319577 sshd-session[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:04:52.325776 systemd-logind[1981]: New session 17 of user core.
Sep 4 00:04:52.331085 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 00:04:53.383625 sshd[6242]: Connection closed by 139.178.68.195 port 34606
Sep 4 00:04:53.384315 sshd-session[6240]: pam_unix(sshd:session): session closed for user core
Sep 4 00:04:53.394100 systemd-logind[1981]: Session 17 logged out. Waiting for processes to exit.
Sep 4 00:04:53.394435 systemd[1]: sshd@16-172.31.20.83:22-139.178.68.195:34606.service: Deactivated successfully.
Sep 4 00:04:53.403175 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 00:04:53.428172 systemd-logind[1981]: Removed session 17.
Sep 4 00:04:53.429528 systemd[1]: Started sshd@17-172.31.20.83:22-139.178.68.195:34608.service - OpenSSH per-connection server daemon (139.178.68.195:34608).
Sep 4 00:04:53.633730 sshd[6265]: Accepted publickey for core from 139.178.68.195 port 34608 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:04:53.635515 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:04:53.644950 systemd-logind[1981]: New session 18 of user core.
Sep 4 00:04:53.650382 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 00:04:55.347765 sshd[6268]: Connection closed by 139.178.68.195 port 34608
Sep 4 00:04:55.349603 sshd-session[6265]: pam_unix(sshd:session): session closed for user core
Sep 4 00:04:55.355794 systemd[1]: sshd@17-172.31.20.83:22-139.178.68.195:34608.service: Deactivated successfully.
Sep 4 00:04:55.359427 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 00:04:55.360541 systemd-logind[1981]: Session 18 logged out. Waiting for processes to exit.
Sep 4 00:04:55.364284 systemd-logind[1981]: Removed session 18.
Sep 4 00:04:55.388798 systemd[1]: Started sshd@18-172.31.20.83:22-139.178.68.195:34616.service - OpenSSH per-connection server daemon (139.178.68.195:34616).
Sep 4 00:04:55.658356 sshd[6280]: Accepted publickey for core from 139.178.68.195 port 34616 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:04:55.661162 sshd-session[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:04:55.670824 systemd-logind[1981]: New session 19 of user core.
Sep 4 00:04:55.676514 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 00:04:56.516052 sshd[6282]: Connection closed by 139.178.68.195 port 34616
Sep 4 00:04:56.518149 sshd-session[6280]: pam_unix(sshd:session): session closed for user core
Sep 4 00:04:56.524257 systemd[1]: sshd@18-172.31.20.83:22-139.178.68.195:34616.service: Deactivated successfully.
Sep 4 00:04:56.527427 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 00:04:56.528429 systemd-logind[1981]: Session 19 logged out. Waiting for processes to exit.
Sep 4 00:04:56.531137 systemd-logind[1981]: Removed session 19.
Sep 4 00:05:01.141181 containerd[2018]: time="2025-09-04T00:05:01.141046877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead\" id:\"42b34fa04f776ba42f8565cb438cdc259af8200257808d515dc1c0b900505d83\" pid:6308 exited_at:{seconds:1756944301 nanos:78682466}"
Sep 4 00:05:01.896179 systemd[1]: Started sshd@19-172.31.20.83:22-139.178.68.195:42938.service - OpenSSH per-connection server daemon (139.178.68.195:42938).
Sep 4 00:05:02.510324 sshd[6319]: Accepted publickey for core from 139.178.68.195 port 42938 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:05:02.535934 sshd-session[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:02.676515 systemd-logind[1981]: New session 20 of user core.
Sep 4 00:05:02.718181 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 00:05:04.560202 sshd[6323]: Connection closed by 139.178.68.195 port 42938
Sep 4 00:05:04.561131 sshd-session[6319]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:04.567346 systemd-logind[1981]: Session 20 logged out. Waiting for processes to exit.
Sep 4 00:05:04.567452 systemd[1]: sshd@19-172.31.20.83:22-139.178.68.195:42938.service: Deactivated successfully.
Sep 4 00:05:04.570225 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 00:05:04.572224 systemd-logind[1981]: Removed session 20.
Sep 4 00:05:05.613590 containerd[2018]: time="2025-09-04T00:05:05.613480664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead\" id:\"1d951c50ba2c73c18d8fb382d46b12a3d0bdf5e09f87a72585fd7718a300533f\" pid:6348 exited_at:{seconds:1756944305 nanos:612717256}"
Sep 4 00:05:09.603714 systemd[1]: Started sshd@20-172.31.20.83:22-139.178.68.195:42946.service - OpenSSH per-connection server daemon (139.178.68.195:42946).
Sep 4 00:05:09.829814 sshd[6362]: Accepted publickey for core from 139.178.68.195 port 42946 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:05:09.831233 sshd-session[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:09.837872 systemd-logind[1981]: New session 21 of user core.
Sep 4 00:05:09.844112 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 00:05:10.612161 sshd[6364]: Connection closed by 139.178.68.195 port 42946
Sep 4 00:05:10.615340 containerd[2018]: time="2025-09-04T00:05:10.613324696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\" id:\"32609bf89da1661043eca20188a3272b27081465f03d8d6ae29c5adea62e03ba\" pid:6380 exited_at:{seconds:1756944310 nanos:612786258}"
Sep 4 00:05:10.613685 sshd-session[6362]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:10.621318 systemd[1]: sshd@20-172.31.20.83:22-139.178.68.195:42946.service: Deactivated successfully.
Sep 4 00:05:10.624440 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 00:05:10.627150 systemd-logind[1981]: Session 21 logged out. Waiting for processes to exit.
Sep 4 00:05:10.630397 systemd-logind[1981]: Removed session 21.
Sep 4 00:05:15.647233 systemd[1]: Started sshd@21-172.31.20.83:22-139.178.68.195:43798.service - OpenSSH per-connection server daemon (139.178.68.195:43798).
Sep 4 00:05:15.876398 sshd[6401]: Accepted publickey for core from 139.178.68.195 port 43798 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:05:15.880047 sshd-session[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:15.888193 systemd-logind[1981]: New session 22 of user core.
Sep 4 00:05:15.900148 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 00:05:16.225737 sshd[6403]: Connection closed by 139.178.68.195 port 43798
Sep 4 00:05:16.229291 sshd-session[6401]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:16.236836 systemd[1]: sshd@21-172.31.20.83:22-139.178.68.195:43798.service: Deactivated successfully.
Sep 4 00:05:16.242002 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 00:05:16.245046 systemd-logind[1981]: Session 22 logged out. Waiting for processes to exit.
Sep 4 00:05:16.249261 systemd-logind[1981]: Removed session 22.
Sep 4 00:05:17.266530 containerd[2018]: time="2025-09-04T00:05:17.266474064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\" id:\"3797b76cbaab4a0d2d21e7f52ea3cd76ffd78fd18c113bac5bc428e9d8d4fbee\" pid:6427 exited_at:{seconds:1756944317 nanos:265955851}"
Sep 4 00:05:21.265064 systemd[1]: Started sshd@22-172.31.20.83:22-139.178.68.195:42414.service - OpenSSH per-connection server daemon (139.178.68.195:42414).
Sep 4 00:05:21.582669 sshd[6438]: Accepted publickey for core from 139.178.68.195 port 42414 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:05:21.586002 sshd-session[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:21.593584 systemd-logind[1981]: New session 23 of user core.
Sep 4 00:05:21.601177 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 00:05:22.535369 sshd[6440]: Connection closed by 139.178.68.195 port 42414
Sep 4 00:05:22.537435 sshd-session[6438]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:22.542374 systemd-logind[1981]: Session 23 logged out. Waiting for processes to exit.
Sep 4 00:05:22.544694 systemd[1]: sshd@22-172.31.20.83:22-139.178.68.195:42414.service: Deactivated successfully.
Sep 4 00:05:22.550389 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 00:05:22.554505 systemd-logind[1981]: Removed session 23.
Sep 4 00:05:30.710632 containerd[2018]: time="2025-09-04T00:05:30.710585131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0d7d282e6b7daf7adc250a327af5d5d7c76b40b5f1b635c2fbace4db4eeaead\" id:\"d5e5b2fe195fcccd390202f471835870f571959ad5a66272571d3f9db60e50b5\" pid:6466 exited_at:{seconds:1756944330 nanos:710231846}"
Sep 4 00:05:33.952232 containerd[2018]: time="2025-09-04T00:05:33.943749220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\" id:\"baa3b6997ae428b9b599c4314a401220144dc9149951f5fbe7f4fe0beef49f26\" pid:6495 exited_at:{seconds:1756944333 nanos:943431750}"
Sep 4 00:05:38.109865 systemd[1]: cri-containerd-b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77.scope: Deactivated successfully.
Sep 4 00:05:38.110268 systemd[1]: cri-containerd-b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77.scope: Consumed 11.072s CPU time, 106.7M memory peak, 97.1M read from disk.
Sep 4 00:05:38.165034 systemd[1]: cri-containerd-a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9.scope: Deactivated successfully.
Sep 4 00:05:38.167412 systemd[1]: cri-containerd-a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9.scope: Consumed 5.277s CPU time, 93.6M memory peak, 133.7M read from disk.
Sep 4 00:05:38.215989 containerd[2018]: time="2025-09-04T00:05:38.215781435Z" level=info msg="received exit event container_id:\"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\" id:\"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\" pid:3934 exit_status:1 exited_at:{seconds:1756944338 nanos:190311276}"
Sep 4 00:05:38.226237 containerd[2018]: time="2025-09-04T00:05:38.226039341Z" level=info msg="received exit event container_id:\"a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9\" id:\"a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9\" pid:3162 exit_status:1 exited_at:{seconds:1756944338 nanos:199607033}"
Sep 4 00:05:38.227184 containerd[2018]: time="2025-09-04T00:05:38.227150209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\" id:\"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\" pid:3934 exit_status:1 exited_at:{seconds:1756944338 nanos:190311276}"
Sep 4 00:05:38.242995 containerd[2018]: time="2025-09-04T00:05:38.242924620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9\" id:\"a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9\" pid:3162 exit_status:1 exited_at:{seconds:1756944338 nanos:199607033}"
Sep 4 00:05:38.385544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77-rootfs.mount: Deactivated successfully.
Sep 4 00:05:38.387001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9-rootfs.mount: Deactivated successfully.
Sep 4 00:05:39.615561 kubelet[3603]: I0904 00:05:39.615505 3603 scope.go:117] "RemoveContainer" containerID="b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77"
Sep 4 00:05:39.618757 kubelet[3603]: I0904 00:05:39.615777 3603 scope.go:117] "RemoveContainer" containerID="a7fb62876e041c598be248ca70445eec48cc076e3d44bb0d2f4b0dab676176c9"
Sep 4 00:05:39.719958 containerd[2018]: time="2025-09-04T00:05:39.719907046Z" level=info msg="CreateContainer within sandbox \"129a0b39cbc045991fa5e1122a16fe6119912b3133f8020bd557540d615f1e5a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 00:05:39.761625 containerd[2018]: time="2025-09-04T00:05:39.760693281Z" level=info msg="CreateContainer within sandbox \"9ea78a2acd03f407d8a5cc442ef4fc0564e8c29f84181385a5d2e3ce7f22a60f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 00:05:39.942865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2812863503.mount: Deactivated successfully.
Sep 4 00:05:39.950177 containerd[2018]: time="2025-09-04T00:05:39.950148411Z" level=info msg="Container 8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:05:39.969888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934046500.mount: Deactivated successfully.
Sep 4 00:05:39.972524 containerd[2018]: time="2025-09-04T00:05:39.972489794Z" level=info msg="Container c68d8fbf56c717d016e96c79f49fb96a0cd0f65a7792e0eff38099de1247046e: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:05:39.988755 containerd[2018]: time="2025-09-04T00:05:39.988705571Z" level=info msg="CreateContainer within sandbox \"129a0b39cbc045991fa5e1122a16fe6119912b3133f8020bd557540d615f1e5a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c68d8fbf56c717d016e96c79f49fb96a0cd0f65a7792e0eff38099de1247046e\""
Sep 4 00:05:39.989146 containerd[2018]: time="2025-09-04T00:05:39.988732396Z" level=info msg="CreateContainer within sandbox \"9ea78a2acd03f407d8a5cc442ef4fc0564e8c29f84181385a5d2e3ce7f22a60f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6\""
Sep 4 00:05:39.997123 containerd[2018]: time="2025-09-04T00:05:39.997083680Z" level=info msg="StartContainer for \"8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6\""
Sep 4 00:05:39.998150 containerd[2018]: time="2025-09-04T00:05:39.997924913Z" level=info msg="StartContainer for \"c68d8fbf56c717d016e96c79f49fb96a0cd0f65a7792e0eff38099de1247046e\""
Sep 4 00:05:40.004308 containerd[2018]: time="2025-09-04T00:05:40.003487637Z" level=info msg="connecting to shim 8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6" address="unix:///run/containerd/s/e5422f1ce3ddecc578eb8107c9391e2912f229b177077362e9737cf22d35876b" protocol=ttrpc version=3
Sep 4 00:05:40.007356 containerd[2018]: time="2025-09-04T00:05:40.007188773Z" level=info msg="connecting to shim c68d8fbf56c717d016e96c79f49fb96a0cd0f65a7792e0eff38099de1247046e" address="unix:///run/containerd/s/d3863e43e2de5f1b8b93c491901f69b949bc9d662cefe7d5fa373069875336eb" protocol=ttrpc version=3
Sep 4 00:05:40.092405 systemd[1]: Started cri-containerd-c68d8fbf56c717d016e96c79f49fb96a0cd0f65a7792e0eff38099de1247046e.scope - libcontainer container c68d8fbf56c717d016e96c79f49fb96a0cd0f65a7792e0eff38099de1247046e.
Sep 4 00:05:40.111122 systemd[1]: Started cri-containerd-8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6.scope - libcontainer container 8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6.
Sep 4 00:05:40.229735 containerd[2018]: time="2025-09-04T00:05:40.229097578Z" level=info msg="StartContainer for \"8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6\" returns successfully"
Sep 4 00:05:40.230440 containerd[2018]: time="2025-09-04T00:05:40.230409483Z" level=info msg="StartContainer for \"c68d8fbf56c717d016e96c79f49fb96a0cd0f65a7792e0eff38099de1247046e\" returns successfully"
Sep 4 00:05:40.464912 containerd[2018]: time="2025-09-04T00:05:40.464831149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862dba424f1398c2fb721999d14fb282f5f4efaeaa91be78edd29c5aa25e3213\" id:\"b2f2c4e0c568b5d72259de9b163ce6d787dd94650f2537e45fdc4d6ef98c19b8\" pid:6546 exited_at:{seconds:1756944340 nanos:464551350}"
Sep 4 00:05:44.446469 systemd[1]: cri-containerd-7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230.scope: Deactivated successfully.
Sep 4 00:05:44.449473 containerd[2018]: time="2025-09-04T00:05:44.448234831Z" level=info msg="received exit event container_id:\"7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230\" id:\"7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230\" pid:3171 exit_status:1 exited_at:{seconds:1756944344 nanos:447656417}"
Sep 4 00:05:44.449473 containerd[2018]: time="2025-09-04T00:05:44.448268254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230\" id:\"7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230\" pid:3171 exit_status:1 exited_at:{seconds:1756944344 nanos:447656417}"
Sep 4 00:05:44.448377 systemd[1]: cri-containerd-7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230.scope: Consumed 1.803s CPU time, 40.8M memory peak, 74.1M read from disk.
Sep 4 00:05:44.512961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230-rootfs.mount: Deactivated successfully.
Sep 4 00:05:44.553388 kubelet[3603]: I0904 00:05:44.553350 3603 scope.go:117] "RemoveContainer" containerID="7cdf9c769401ef66c0c2adb8961f1c8df744fe6ffd1702b8e319308f9d96a230"
Sep 4 00:05:44.556602 containerd[2018]: time="2025-09-04T00:05:44.556544948Z" level=info msg="CreateContainer within sandbox \"cd44b0b8fffa7e8293ef998c7be16b4f1caa97a9c03e11a2028aea472f70cebc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 00:05:44.575171 containerd[2018]: time="2025-09-04T00:05:44.575134993Z" level=info msg="Container 871187e96c85442efe5c9808dab391cdc36275c83525529bdded5c4b1bc0f0c1: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:05:44.587377 containerd[2018]: time="2025-09-04T00:05:44.587331119Z" level=info msg="CreateContainer within sandbox \"cd44b0b8fffa7e8293ef998c7be16b4f1caa97a9c03e11a2028aea472f70cebc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"871187e96c85442efe5c9808dab391cdc36275c83525529bdded5c4b1bc0f0c1\""
Sep 4 00:05:44.587954 containerd[2018]: time="2025-09-04T00:05:44.587921053Z" level=info msg="StartContainer for \"871187e96c85442efe5c9808dab391cdc36275c83525529bdded5c4b1bc0f0c1\""
Sep 4 00:05:44.589566 containerd[2018]: time="2025-09-04T00:05:44.589499161Z" level=info msg="connecting to shim 871187e96c85442efe5c9808dab391cdc36275c83525529bdded5c4b1bc0f0c1" address="unix:///run/containerd/s/b4c8a819ad742f5ed8380c344342251ac6191953ae89e790413fe7a74073dddd" protocol=ttrpc version=3
Sep 4 00:05:44.616110 systemd[1]: Started cri-containerd-871187e96c85442efe5c9808dab391cdc36275c83525529bdded5c4b1bc0f0c1.scope - libcontainer container 871187e96c85442efe5c9808dab391cdc36275c83525529bdded5c4b1bc0f0c1.
Sep 4 00:05:44.696154 containerd[2018]: time="2025-09-04T00:05:44.696115753Z" level=info msg="StartContainer for \"871187e96c85442efe5c9808dab391cdc36275c83525529bdded5c4b1bc0f0c1\" returns successfully"
Sep 4 00:05:46.931472 kubelet[3603]: E0904 00:05:46.931285 3603 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 4 00:05:47.210525 containerd[2018]: time="2025-09-04T00:05:47.210209776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac3be521731f91d40e7769ba404db4e13b8f6b8bc8b226769a27b40bf083a567\" id:\"09ee8b13d8093079a2844f66a8f9d71606e7e1c70bcb6badf1c5f28413385976\" pid:6685 exit_status:1 exited_at:{seconds:1756944347 nanos:209516773}"
Sep 4 00:05:52.045074 systemd[1]: cri-containerd-8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6.scope: Deactivated successfully.
Sep 4 00:05:52.045391 systemd[1]: cri-containerd-8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6.scope: Consumed 383ms CPU time, 63.5M memory peak, 33.8M read from disk.
Sep 4 00:05:52.054912 containerd[2018]: time="2025-09-04T00:05:52.054847250Z" level=info msg="received exit event container_id:\"8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6\" id:\"8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6\" pid:6587 exit_status:1 exited_at:{seconds:1756944352 nanos:54573602}"
Sep 4 00:05:52.055518 containerd[2018]: time="2025-09-04T00:05:52.055166678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6\" id:\"8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6\" pid:6587 exit_status:1 exited_at:{seconds:1756944352 nanos:54573602}"
Sep 4 00:05:52.080495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6-rootfs.mount: Deactivated successfully.
Sep 4 00:05:52.580701 kubelet[3603]: I0904 00:05:52.580651 3603 scope.go:117] "RemoveContainer" containerID="b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77"
Sep 4 00:05:52.581507 kubelet[3603]: I0904 00:05:52.581203 3603 scope.go:117] "RemoveContainer" containerID="8c0f96d6d187a202bd55ee1ba3eb7ed96c00c32877903f1b35dfdcc931cd2df6"
Sep 4 00:05:52.586105 kubelet[3603]: E0904 00:05:52.586037 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-9tcx7_tigera-operator(fc988561-0072-46e7-8e12-ddfba82b61b1)\"" pod="tigera-operator/tigera-operator-755d956888-9tcx7" podUID="fc988561-0072-46e7-8e12-ddfba82b61b1"
Sep 4 00:05:52.635186 containerd[2018]: time="2025-09-04T00:05:52.635115830Z" level=info msg="RemoveContainer for \"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\""
Sep 4 00:05:52.648510 containerd[2018]: time="2025-09-04T00:05:52.648455844Z" level=info msg="RemoveContainer for \"b4e081c71057813db3909f6add88eb60052790389ea1da117ed9972c3b1e9e77\" returns successfully"
Sep 4 00:05:56.935623 kubelet[3603]: E0904 00:05:56.935565 3603 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"