Sep 4 00:07:01.022509 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025
Sep 4 00:07:01.023091 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:07:01.023108 kernel: BIOS-provided physical RAM map:
Sep 4 00:07:01.023120 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 00:07:01.023131 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 4 00:07:01.023142 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 4 00:07:01.023156 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 4 00:07:01.023250 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 4 00:07:01.023268 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 4 00:07:01.023281 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 4 00:07:01.023293 kernel: NX (Execute Disable) protection: active
Sep 4 00:07:01.023305 kernel: APIC: Static calls initialized
Sep 4 00:07:01.023317 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 4 00:07:01.023330 kernel: extended physical RAM map:
Sep 4 00:07:01.023349 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 00:07:01.023362 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Sep 4 00:07:01.023376 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Sep 4 00:07:01.023389 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Sep 4 00:07:01.023403 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 4 00:07:01.023416 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 4 00:07:01.023429 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 4 00:07:01.023442 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 4 00:07:01.023456 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 4 00:07:01.023472 kernel: efi: EFI v2.7 by EDK II
Sep 4 00:07:01.023485 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 4 00:07:01.023499 kernel: secureboot: Secure boot disabled
Sep 4 00:07:01.023512 kernel: SMBIOS 2.7 present.
Sep 4 00:07:01.023525 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 4 00:07:01.023538 kernel: DMI: Memory slots populated: 1/1
Sep 4 00:07:01.023551 kernel: Hypervisor detected: KVM
Sep 4 00:07:01.023564 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 00:07:01.023577 kernel: kvm-clock: using sched offset of 4765002695 cycles
Sep 4 00:07:01.023592 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 00:07:01.023606 kernel: tsc: Detected 2500.008 MHz processor
Sep 4 00:07:01.023620 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 00:07:01.023636 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 00:07:01.023650 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 4 00:07:01.023664 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 00:07:01.023678 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 00:07:01.023692 kernel: Using GB pages for direct mapping
Sep 4 00:07:01.023711 kernel: ACPI: Early table checksum verification disabled
Sep 4 00:07:01.023728 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 4 00:07:01.023743 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 00:07:01.023757 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 00:07:01.023772 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 4 00:07:01.023788 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 4 00:07:01.023802 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 4 00:07:01.023817 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 00:07:01.023834 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 00:07:01.023849 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 4 00:07:01.023863 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 4 00:07:01.023878 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 4 00:07:01.023893 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 4 00:07:01.023908 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 4 00:07:01.023922 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 4 00:07:01.023937 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 4 00:07:01.023951 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 4 00:07:01.023969 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 4 00:07:01.023983 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 4 00:07:01.023997 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 4 00:07:01.024012 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 4 00:07:01.024026 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 4 00:07:01.024074 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 4 00:07:01.024093 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 4 00:07:01.024106 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 4 00:07:01.024118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 4 00:07:01.024136 kernel: NUMA: Initialized distance table, cnt=1
Sep 4 00:07:01.024150 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Sep 4 00:07:01.024164 kernel: Zone ranges:
Sep 4 00:07:01.024179 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 00:07:01.024193 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 4 00:07:01.024207 kernel: Normal empty
Sep 4 00:07:01.024221 kernel: Device empty
Sep 4 00:07:01.024236 kernel: Movable zone start for each node
Sep 4 00:07:01.024250 kernel: Early memory node ranges
Sep 4 00:07:01.024264 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 00:07:01.024281 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 4 00:07:01.024296 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 4 00:07:01.024311 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 4 00:07:01.024325 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 00:07:01.024339 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 00:07:01.024354 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 4 00:07:01.024369 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 4 00:07:01.024383 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 4 00:07:01.024398 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 00:07:01.024415 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 4 00:07:01.024430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 00:07:01.024444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 00:07:01.024459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 00:07:01.024473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 00:07:01.024488 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 00:07:01.024502 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 00:07:01.024517 kernel: TSC deadline timer available
Sep 4 00:07:01.024531 kernel: CPU topo: Max. logical packages: 1
Sep 4 00:07:01.024548 kernel: CPU topo: Max. logical dies: 1
Sep 4 00:07:01.024562 kernel: CPU topo: Max. dies per package: 1
Sep 4 00:07:01.024576 kernel: CPU topo: Max. threads per core: 2
Sep 4 00:07:01.024590 kernel: CPU topo: Num. cores per package: 1
Sep 4 00:07:01.024604 kernel: CPU topo: Num. threads per package: 2
Sep 4 00:07:01.024618 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 4 00:07:01.024633 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 00:07:01.024647 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 4 00:07:01.024661 kernel: Booting paravirtualized kernel on KVM
Sep 4 00:07:01.024676 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 00:07:01.024694 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 00:07:01.024708 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 4 00:07:01.024722 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 4 00:07:01.024736 kernel: pcpu-alloc: [0] 0 1
Sep 4 00:07:01.024751 kernel: kvm-guest: PV spinlocks enabled
Sep 4 00:07:01.024765 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 00:07:01.024782 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:07:01.024798 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 00:07:01.024816 kernel: random: crng init done
Sep 4 00:07:01.024830 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 00:07:01.024845 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 00:07:01.024859 kernel: Fallback order for Node 0: 0
Sep 4 00:07:01.024874 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Sep 4 00:07:01.024889 kernel: Policy zone: DMA32
Sep 4 00:07:01.024917 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 00:07:01.024932 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 00:07:01.024947 kernel: Kernel/User page tables isolation: enabled
Sep 4 00:07:01.024963 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 4 00:07:01.024978 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 00:07:01.024996 kernel: Dynamic Preempt: voluntary
Sep 4 00:07:01.025010 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 00:07:01.025026 kernel: rcu: RCU event tracing is enabled.
Sep 4 00:07:01.025087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 00:07:01.025100 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 00:07:01.025113 kernel: Rude variant of Tasks RCU enabled.
Sep 4 00:07:01.025131 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 00:07:01.025144 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 00:07:01.025160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 00:07:01.025174 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:07:01.025191 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:07:01.025204 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:07:01.025220 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 00:07:01.025237 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 00:07:01.025255 kernel: Console: colour dummy device 80x25
Sep 4 00:07:01.025269 kernel: printk: legacy console [tty0] enabled
Sep 4 00:07:01.025283 kernel: printk: legacy console [ttyS0] enabled
Sep 4 00:07:01.025296 kernel: ACPI: Core revision 20240827
Sep 4 00:07:01.025311 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 4 00:07:01.025324 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 00:07:01.025341 kernel: x2apic enabled
Sep 4 00:07:01.025360 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 00:07:01.025373 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2409413c780, max_idle_ns: 440795222072 ns
Sep 4 00:07:01.025391 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500008)
Sep 4 00:07:01.025405 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 4 00:07:01.025419 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 4 00:07:01.027079 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 00:07:01.027101 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 00:07:01.027119 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 00:07:01.027135 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 00:07:01.027152 kernel: RETBleed: Vulnerable
Sep 4 00:07:01.027168 kernel: Speculative Store Bypass: Vulnerable
Sep 4 00:07:01.027183 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 00:07:01.027199 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 00:07:01.027220 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 4 00:07:01.027236 kernel: active return thunk: its_return_thunk
Sep 4 00:07:01.027251 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 4 00:07:01.027266 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 00:07:01.027281 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 00:07:01.027296 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 00:07:01.027311 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 4 00:07:01.027326 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 4 00:07:01.027342 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 00:07:01.027358 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 00:07:01.027377 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 00:07:01.027392 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 4 00:07:01.027408 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 00:07:01.027421 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 4 00:07:01.027436 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 4 00:07:01.027451 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 4 00:07:01.027467 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 4 00:07:01.027482 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 4 00:07:01.027497 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 4 00:07:01.027512 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 4 00:07:01.027528 kernel: Freeing SMP alternatives memory: 32K
Sep 4 00:07:01.027542 kernel: pid_max: default: 32768 minimum: 301
Sep 4 00:07:01.027561 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 00:07:01.027576 kernel: landlock: Up and running.
Sep 4 00:07:01.027591 kernel: SELinux: Initializing.
Sep 4 00:07:01.027607 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 00:07:01.027622 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 00:07:01.027638 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 4 00:07:01.027654 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 4 00:07:01.027669 kernel: signal: max sigframe size: 3632
Sep 4 00:07:01.027685 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 00:07:01.027701 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 00:07:01.027721 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 00:07:01.027737 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 00:07:01.027752 kernel: smp: Bringing up secondary CPUs ...
Sep 4 00:07:01.027768 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 00:07:01.027783 kernel: .... node #0, CPUs: #1
Sep 4 00:07:01.027800 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 4 00:07:01.027817 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 00:07:01.027833 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 00:07:01.027849 kernel: smpboot: Total of 2 processors activated (10000.03 BogoMIPS)
Sep 4 00:07:01.027868 kernel: Memory: 1910108K/2037804K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 123140K reserved, 0K cma-reserved)
Sep 4 00:07:01.027884 kernel: devtmpfs: initialized
Sep 4 00:07:01.027900 kernel: x86/mm: Memory block size: 128MB
Sep 4 00:07:01.027915 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 4 00:07:01.027931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 00:07:01.027947 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 00:07:01.027963 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 00:07:01.027980 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 00:07:01.027998 kernel: audit: initializing netlink subsys (disabled)
Sep 4 00:07:01.028015 kernel: audit: type=2000 audit(1756944418.098:1): state=initialized audit_enabled=0 res=1
Sep 4 00:07:01.028030 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 00:07:01.029189 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 00:07:01.029204 kernel: cpuidle: using governor menu
Sep 4 00:07:01.029217 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 00:07:01.029230 kernel: dca service started, version 1.12.1
Sep 4 00:07:01.029243 kernel: PCI: Using configuration type 1 for base access
Sep 4 00:07:01.029257 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 00:07:01.029276 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 00:07:01.029290 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 00:07:01.029302 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 00:07:01.029317 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 00:07:01.029332 kernel: ACPI: Added _OSI(Module Device)
Sep 4 00:07:01.029347 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 00:07:01.029362 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 00:07:01.029378 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 4 00:07:01.029390 kernel: ACPI: Interpreter enabled
Sep 4 00:07:01.029403 kernel: ACPI: PM: (supports S0 S5)
Sep 4 00:07:01.029421 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 00:07:01.029438 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 00:07:01.029452 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 00:07:01.029466 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 00:07:01.029480 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 00:07:01.029708 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 00:07:01.029893 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 00:07:01.031101 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 00:07:01.031128 kernel: acpiphp: Slot [3] registered
Sep 4 00:07:01.031144 kernel: acpiphp: Slot [4] registered
Sep 4 00:07:01.031158 kernel: acpiphp: Slot [5] registered
Sep 4 00:07:01.031172 kernel: acpiphp: Slot [6] registered
Sep 4 00:07:01.031187 kernel: acpiphp: Slot [7] registered
Sep 4 00:07:01.031200 kernel: acpiphp: Slot [8] registered
Sep 4 00:07:01.031214 kernel: acpiphp: Slot [9] registered
Sep 4 00:07:01.031228 kernel: acpiphp: Slot [10] registered
Sep 4 00:07:01.031247 kernel: acpiphp: Slot [11] registered
Sep 4 00:07:01.031261 kernel: acpiphp: Slot [12] registered
Sep 4 00:07:01.031274 kernel: acpiphp: Slot [13] registered
Sep 4 00:07:01.031288 kernel: acpiphp: Slot [14] registered
Sep 4 00:07:01.031302 kernel: acpiphp: Slot [15] registered
Sep 4 00:07:01.031317 kernel: acpiphp: Slot [16] registered
Sep 4 00:07:01.031330 kernel: acpiphp: Slot [17] registered
Sep 4 00:07:01.031345 kernel: acpiphp: Slot [18] registered
Sep 4 00:07:01.031359 kernel: acpiphp: Slot [19] registered
Sep 4 00:07:01.031373 kernel: acpiphp: Slot [20] registered
Sep 4 00:07:01.031390 kernel: acpiphp: Slot [21] registered
Sep 4 00:07:01.031403 kernel: acpiphp: Slot [22] registered
Sep 4 00:07:01.031417 kernel: acpiphp: Slot [23] registered
Sep 4 00:07:01.031431 kernel: acpiphp: Slot [24] registered
Sep 4 00:07:01.031445 kernel: acpiphp: Slot [25] registered
Sep 4 00:07:01.031458 kernel: acpiphp: Slot [26] registered
Sep 4 00:07:01.031472 kernel: acpiphp: Slot [27] registered
Sep 4 00:07:01.031505 kernel: acpiphp: Slot [28] registered
Sep 4 00:07:01.031522 kernel: acpiphp: Slot [29] registered
Sep 4 00:07:01.031540 kernel: acpiphp: Slot [30] registered
Sep 4 00:07:01.031556 kernel: acpiphp: Slot [31] registered
Sep 4 00:07:01.031569 kernel: PCI host bridge to bus 0000:00
Sep 4 00:07:01.031739 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 00:07:01.031865 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 00:07:01.031988 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 00:07:01.033266 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 00:07:01.033404 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 4 00:07:01.033533 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 00:07:01.033694 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 4 00:07:01.033835 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 4 00:07:01.033993 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Sep 4 00:07:01.034146 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 4 00:07:01.034282 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 4 00:07:01.034417 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 4 00:07:01.034577 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 4 00:07:01.034721 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 4 00:07:01.034853 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 4 00:07:01.034984 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 4 00:07:01.036203 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 00:07:01.036352 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Sep 4 00:07:01.036489 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 4 00:07:01.036621 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 00:07:01.036760 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Sep 4 00:07:01.036892 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Sep 4 00:07:01.037030 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Sep 4 00:07:01.037185 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Sep 4 00:07:01.037208 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 00:07:01.037224 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 00:07:01.037239 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 00:07:01.037254 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 00:07:01.037269 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 00:07:01.037284 kernel: iommu: Default domain type: Translated
Sep 4 00:07:01.037299 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 00:07:01.037314 kernel: efivars: Registered efivars operations
Sep 4 00:07:01.037329 kernel: PCI: Using ACPI for IRQ routing
Sep 4 00:07:01.037347 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 00:07:01.037362 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Sep 4 00:07:01.037376 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 4 00:07:01.037390 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 4 00:07:01.037518 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 4 00:07:01.037648 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 4 00:07:01.037779 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 00:07:01.037797 kernel: vgaarb: loaded
Sep 4 00:07:01.037812 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 4 00:07:01.037831 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 4 00:07:01.037846 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 00:07:01.037860 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 00:07:01.037875 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 00:07:01.037891 kernel: pnp: PnP ACPI init
Sep 4 00:07:01.037905 kernel: pnp: PnP ACPI: found 5 devices
Sep 4 00:07:01.037920 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 00:07:01.037946 kernel: NET: Registered PF_INET protocol family
Sep 4 00:07:01.037961 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 00:07:01.037979 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 00:07:01.037994 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 00:07:01.038010 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 00:07:01.038025 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 00:07:01.040147 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 00:07:01.040167 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 00:07:01.040182 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 00:07:01.040198 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 00:07:01.040219 kernel: NET: Registered PF_XDP protocol family
Sep 4 00:07:01.040379 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 00:07:01.040518 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 00:07:01.040652 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 00:07:01.040785 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 00:07:01.040917 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 4 00:07:01.041095 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 00:07:01.041117 kernel: PCI: CLS 0 bytes, default 64
Sep 4 00:07:01.041133 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 00:07:01.041153 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2409413c780, max_idle_ns: 440795222072 ns
Sep 4 00:07:01.041166 kernel: clocksource: Switched to clocksource tsc
Sep 4 00:07:01.041181 kernel: Initialise system trusted keyrings
Sep 4 00:07:01.041198 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 00:07:01.041212 kernel: Key type asymmetric registered
Sep 4 00:07:01.041227 kernel: Asymmetric key parser 'x509' registered
Sep 4 00:07:01.041242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 00:07:01.041259 kernel: io scheduler mq-deadline registered
Sep 4 00:07:01.041271 kernel: io scheduler kyber registered
Sep 4 00:07:01.041288 kernel: io scheduler bfq registered
Sep 4 00:07:01.041303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 00:07:01.041318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 00:07:01.041331 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:07:01.041345 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 00:07:01.041358 kernel: i8042: Warning: Keylock active
Sep 4 00:07:01.041372 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 00:07:01.041386 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 00:07:01.041538 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 4 00:07:01.041666 kernel: rtc_cmos 00:00: registered as rtc0
Sep 4 00:07:01.041784 kernel: rtc_cmos 00:00: setting system clock to 2025-09-04T00:07:00 UTC (1756944420)
Sep 4 00:07:01.041900 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 4 00:07:01.044110 kernel: intel_pstate: CPU model not supported
Sep 4 00:07:01.044136 kernel: efifb: probing for efifb
Sep 4 00:07:01.044151 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 4 00:07:01.044166 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 4 00:07:01.044183 kernel: efifb: scrolling: redraw
Sep 4 00:07:01.044197 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 00:07:01.044212 kernel: Console: switching to colour frame buffer device 100x37
Sep 4 00:07:01.044227 kernel: fb0: EFI VGA frame buffer device
Sep 4 00:07:01.044243 kernel: pstore: Using crash dump compression: deflate
Sep 4 00:07:01.044258 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 00:07:01.044272 kernel: NET: Registered PF_INET6 protocol family
Sep 4 00:07:01.044287 kernel: Segment Routing with IPv6
Sep 4 00:07:01.044301 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 00:07:01.044320 kernel: NET: Registered PF_PACKET protocol family
Sep 4 00:07:01.044335 kernel: Key type dns_resolver registered
Sep 4 00:07:01.044350 kernel: IPI shorthand broadcast: enabled
Sep 4 00:07:01.044368 kernel: sched_clock: Marking stable (2666003822, 145216605)->(2903506207, -92285780)
Sep 4 00:07:01.044383 kernel: registered taskstats version 1
Sep 4 00:07:01.044398 kernel: Loading compiled-in X.509 certificates
Sep 4 00:07:01.044412 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10'
Sep 4 00:07:01.044428 kernel: Demotion targets for Node 0: null
Sep 4 00:07:01.044442 kernel: Key type .fscrypt registered
Sep 4 00:07:01.044456 kernel: Key type fscrypt-provisioning registered
Sep 4 00:07:01.044474 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 00:07:01.044489 kernel: ima: Allocated hash algorithm: sha1
Sep 4 00:07:01.044504 kernel: ima: No architecture policies found
Sep 4 00:07:01.044519 kernel: clk: Disabling unused clocks
Sep 4 00:07:01.044533 kernel: Warning: unable to open an initial console.
Sep 4 00:07:01.044548 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 4 00:07:01.044563 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 00:07:01.044578 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 4 00:07:01.044595 kernel: Run /init as init process
Sep 4 00:07:01.044610 kernel: with arguments:
Sep 4 00:07:01.044624 kernel: /init
Sep 4 00:07:01.044638 kernel: with environment:
Sep 4 00:07:01.044653 kernel: HOME=/
Sep 4 00:07:01.044667 kernel: TERM=linux
Sep 4 00:07:01.044684 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 00:07:01.044701 systemd[1]: Successfully made /usr/ read-only.
Sep 4 00:07:01.044721 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 00:07:01.044737 systemd[1]: Detected virtualization amazon. Sep 4 00:07:01.044751 systemd[1]: Detected architecture x86-64. Sep 4 00:07:01.044766 systemd[1]: Running in initrd. Sep 4 00:07:01.044780 systemd[1]: No hostname configured, using default hostname. Sep 4 00:07:01.044798 systemd[1]: Hostname set to . Sep 4 00:07:01.044814 systemd[1]: Initializing machine ID from VM UUID. Sep 4 00:07:01.044830 systemd[1]: Queued start job for default target initrd.target. Sep 4 00:07:01.044844 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 00:07:01.044859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 00:07:01.044877 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 00:07:01.044892 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 00:07:01.044908 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 00:07:01.044928 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 00:07:01.044944 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 00:07:01.044960 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 00:07:01.044975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 4 00:07:01.044991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:07:01.045006 systemd[1]: Reached target paths.target - Path Units. Sep 4 00:07:01.045022 systemd[1]: Reached target slices.target - Slice Units. Sep 4 00:07:01.045050 systemd[1]: Reached target swap.target - Swaps. Sep 4 00:07:01.045067 systemd[1]: Reached target timers.target - Timer Units. Sep 4 00:07:01.045082 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 00:07:01.045097 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 00:07:01.045113 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 00:07:01.045128 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 00:07:01.045146 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 00:07:01.045161 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 00:07:01.045179 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 00:07:01.045194 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 00:07:01.045210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 00:07:01.045225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 00:07:01.045241 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 00:07:01.045256 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 00:07:01.045272 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 00:07:01.045287 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 00:07:01.045306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 4 00:07:01.045325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:07:01.045340 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 00:07:01.045356 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 00:07:01.045410 systemd-journald[207]: Collecting audit messages is disabled. Sep 4 00:07:01.045448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:07:01.045464 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 00:07:01.045480 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 00:07:01.045496 systemd-journald[207]: Journal started Sep 4 00:07:01.045530 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2eb9c13b8b9a56cba165879a6c6a11) is 4.8M, max 38.4M, 33.6M free. Sep 4 00:07:01.049521 systemd-modules-load[208]: Inserted module 'overlay' Sep 4 00:07:01.060077 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 00:07:01.060854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:07:01.067170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 00:07:01.084216 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:07:01.091179 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:07:01.098292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 00:07:01.110570 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 4 00:07:01.110614 kernel: Bridge firewalling registered Sep 4 00:07:01.108084 systemd-modules-load[208]: Inserted module 'br_netfilter' Sep 4 00:07:01.112003 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 00:07:01.118914 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 00:07:01.120227 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:07:01.128572 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 00:07:01.131316 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:07:01.137283 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 00:07:01.138968 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 00:07:01.142087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:07:01.145874 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 00:07:01.173073 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:07:01.203763 systemd-resolved[247]: Positive Trust Anchors: Sep 4 00:07:01.205880 systemd-resolved[247]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 00:07:01.205953 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:07:01.213404 systemd-resolved[247]: Defaulting to hostname 'linux'. Sep 4 00:07:01.216977 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:07:01.218165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:07:01.331132 kernel: SCSI subsystem initialized Sep 4 00:07:01.362767 kernel: Loading iSCSI transport class v2.0-870. Sep 4 00:07:01.415121 kernel: iscsi: registered transport (tcp) Sep 4 00:07:01.458353 kernel: iscsi: registered transport (qla4xxx) Sep 4 00:07:01.458450 kernel: QLogic iSCSI HBA Driver Sep 4 00:07:01.480573 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 00:07:01.504982 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:07:01.510116 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:07:01.673546 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 00:07:01.680603 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
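The trust-anchor record systemd-resolved logs above (the lone "." at the end of the previous entry is its owner name, the root zone) is a DS record in standard presentation format: key tag, DNSKEY algorithm, digest type, digest. A minimal sketch, outside the boot itself, of pulling those fields apart:

```python
# Root-zone DS trust anchor as reported by systemd-resolved above:
# key tag, DNSKEY algorithm, digest type, then the digest itself.
record = ("20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

def parse_ds(rdata):
    """Split DS RDATA (presentation format) into its four fields."""
    key_tag, algorithm, digest_type, digest = rdata.split(maxsplit=3)
    return int(key_tag), int(algorithm), int(digest_type), digest.lower()

tag, alg, dtype, digest = parse_ds(record)
# Algorithm 8 is RSASHA256; digest type 2 is SHA-256, hence 32 digest bytes.
print(tag, alg, dtype, len(digest) // 2)  # 20326 8 2 32
```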
Sep 4 00:07:01.945261 kernel: raid6: avx512x4 gen() 10521 MB/s Sep 4 00:07:01.963093 kernel: raid6: avx512x2 gen() 9811 MB/s Sep 4 00:07:01.981085 kernel: raid6: avx512x1 gen() 13016 MB/s Sep 4 00:07:01.999097 kernel: raid6: avx2x4 gen() 11273 MB/s Sep 4 00:07:02.017072 kernel: raid6: avx2x2 gen() 11036 MB/s Sep 4 00:07:02.035863 kernel: raid6: avx2x1 gen() 8062 MB/s Sep 4 00:07:02.035940 kernel: raid6: using algorithm avx512x1 gen() 13016 MB/s Sep 4 00:07:02.063405 kernel: raid6: .... xor() 14228 MB/s, rmw enabled Sep 4 00:07:02.063488 kernel: raid6: using avx512x2 recovery algorithm Sep 4 00:07:02.126372 kernel: xor: automatically using best checksumming function avx Sep 4 00:07:02.398071 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 00:07:02.414310 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:07:02.420555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:07:02.482345 systemd-udevd[456]: Using default interface naming scheme 'v255'. Sep 4 00:07:02.498154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:07:02.507800 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 00:07:02.585444 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Sep 4 00:07:02.672495 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:07:02.684701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:07:02.810842 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:07:02.815156 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
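The raid6 lines above show the kernel benchmarking every available parity-generation routine and then keeping the fastest one. A minimal sketch of that selection, using the throughputs measured on this boot:

```python
# RAID6 gen() throughputs (MB/s) exactly as benchmarked in the log above.
gen_results = {
    "avx512x4": 10521,
    "avx512x2": 9811,
    "avx512x1": 13016,
    "avx2x4": 11273,
    "avx2x2": 11036,
    "avx2x1": 8062,
}

def pick_algorithm(results):
    """Mimic the kernel's choice: the routine with the highest throughput."""
    return max(results, key=results.get)

best = pick_algorithm(gen_results)
print(f"using algorithm {best} gen() {gen_results[best]} MB/s")
```

This reproduces the log's "using algorithm avx512x1 gen() 13016 MB/s" line; the recovery (xor/rmw) routine is benchmarked separately, which is why it can land on a different variant (avx512x2 here).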
Sep 4 00:07:02.900122 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 00:07:02.910062 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 4 00:07:02.916075 kernel: AES CTR mode by8 optimization enabled Sep 4 00:07:02.959251 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 4 00:07:02.959533 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 4 00:07:02.974805 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 4 00:07:02.975117 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 4 00:07:02.979212 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 4 00:07:02.980161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:07:02.984246 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:dd:96:a7:a5:33 Sep 4 00:07:02.980343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:07:02.985256 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:07:02.987344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:07:02.989151 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 00:07:02.999078 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 4 00:07:03.001134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:07:03.001296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:07:03.003388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:07:03.009328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 00:07:03.009411 kernel: GPT:9289727 != 16777215 Sep 4 00:07:03.009429 kernel: GPT:Alternate GPT header not at the end of the disk. 
Sep 4 00:07:03.009447 kernel: GPT:9289727 != 16777215 Sep 4 00:07:03.009463 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 00:07:03.009481 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:07:03.017122 (udev-worker)[506]: Network interface NamePolicy= disabled on kernel command line. Sep 4 00:07:03.038559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:07:03.057095 kernel: nvme nvme0: using unchecked data buffer Sep 4 00:07:03.203527 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 4 00:07:03.204298 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 4 00:07:03.217225 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 4 00:07:03.218219 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 00:07:03.230424 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 4 00:07:03.241617 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 00:07:03.242332 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 00:07:03.243526 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:07:03.244688 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:07:03.246464 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 00:07:03.249181 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 00:07:03.271657 disk-uuid[695]: Primary Header is updated. Sep 4 00:07:03.271657 disk-uuid[695]: Secondary Entries is updated. Sep 4 00:07:03.271657 disk-uuid[695]: Secondary Header is updated. Sep 4 00:07:03.278309 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
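The GPT complaints above are the classic signature of a block device that was grown after partitioning (here, an EBS volume larger than the original image): the backup GPT header still sits at the old last LBA instead of the disk's current end. A small sketch of the arithmetic behind the "9289727 != 16777215" message, assuming 512-byte logical sectors:

```python
SECTOR = 512  # bytes; assumed logical sector size

def alt_header_lba(total_sectors):
    """GPT keeps its backup (alternate) header in the disk's very last LBA."""
    return total_sectors - 1

recorded_alt = 9289727                      # where the on-disk header points
current_last = alt_header_lba(16_777_216)   # the device's actual last LBA

print(recorded_alt != current_last)          # True: the kernel's warning
print((recorded_alt + 1) * SECTOR / 2**30)   # original image size, ~4.43 GiB
print((current_last + 1) * SECTOR / 2**30)   # current device size, 8.0 GiB
```

The log's advice to use GNU Parted (or a tool like sgdisk) amounts to relocating the backup structures to the new end of the disk; on Flatcar this is normally resolved automatically when the root partition is grown on first boot.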
Sep 4 00:07:03.281102 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:07:03.288063 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:07:04.295333 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:07:04.295914 disk-uuid[700]: The operation has completed successfully. Sep 4 00:07:04.414138 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 00:07:04.414233 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 00:07:04.578421 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 00:07:04.591646 sh[961]: Success Sep 4 00:07:04.619191 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 00:07:04.619271 kernel: device-mapper: uevent: version 1.0.3 Sep 4 00:07:04.619293 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 00:07:04.632103 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 4 00:07:04.787123 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 00:07:04.790816 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 00:07:04.806746 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 00:07:04.845371 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (984) Sep 4 00:07:04.850881 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071 Sep 4 00:07:04.850968 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:07:04.920083 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 00:07:04.920190 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 00:07:04.920213 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 00:07:04.924942 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
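The disk-uuid.service activity above does what its description says: it writes fresh GPT identifiers on first boot so that instances cloned from the same image don't share disk UUIDs. A purely illustrative sketch (not Flatcar's actual implementation) of generating the kind of random, version-4 UUID such a tool would stamp into the headers:

```python
import uuid

# A random (version 4) UUID of the sort a first-boot tool could write into
# the GPT primary and secondary headers; illustrative only.
new_disk_guid = uuid.uuid4()
print(new_disk_guid)
print(new_disk_guid.version)  # 4
```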
Sep 4 00:07:04.926768 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:07:04.927417 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 00:07:04.928639 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 00:07:04.931265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 00:07:04.971342 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014) Sep 4 00:07:04.975070 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:07:04.975144 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:07:04.996875 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:07:04.996961 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:07:05.006116 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:07:05.006739 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 00:07:05.010212 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 00:07:05.046700 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:07:05.049967 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 00:07:05.095628 systemd-networkd[1155]: lo: Link UP Sep 4 00:07:05.095641 systemd-networkd[1155]: lo: Gained carrier Sep 4 00:07:05.097372 systemd-networkd[1155]: Enumeration completed Sep 4 00:07:05.097499 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:07:05.098357 systemd-networkd[1155]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 00:07:05.098362 systemd-networkd[1155]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:07:05.098497 systemd[1]: Reached target network.target - Network. Sep 4 00:07:05.101962 systemd-networkd[1155]: eth0: Link UP Sep 4 00:07:05.101970 systemd-networkd[1155]: eth0: Gained carrier Sep 4 00:07:05.101988 systemd-networkd[1155]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:07:05.116230 systemd-networkd[1155]: eth0: DHCPv4 address 172.31.29.190/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 00:07:05.395562 ignition[1112]: Ignition 2.21.0 Sep 4 00:07:05.395577 ignition[1112]: Stage: fetch-offline Sep 4 00:07:05.395746 ignition[1112]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:07:05.395754 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 00:07:05.396062 ignition[1112]: Ignition finished successfully Sep 4 00:07:05.398542 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:07:05.400636 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
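The DHCPv4 lease logged above can be sanity-checked offline: with a /20 prefix, 172.31.29.190 and the gateway 172.31.16.1 must fall in the same 4096-address block. A quick check with Python's ipaddress module:

```python
import ipaddress

# Address, prefix length, and gateway exactly as logged by systemd-networkd.
iface = ipaddress.ip_interface("172.31.29.190/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                 # 172.31.16.0/20
print(gateway in iface.network)      # True: the gateway is on-link
print(iface.network.num_addresses)   # 4096 addresses in a /20
```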
Sep 4 00:07:05.425732 ignition[1165]: Ignition 2.21.0 Sep 4 00:07:05.425747 ignition[1165]: Stage: fetch Sep 4 00:07:05.426141 ignition[1165]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:07:05.426153 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 00:07:05.426270 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 00:07:05.462624 ignition[1165]: PUT result: OK Sep 4 00:07:05.465087 ignition[1165]: parsed url from cmdline: "" Sep 4 00:07:05.465097 ignition[1165]: no config URL provided Sep 4 00:07:05.465109 ignition[1165]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:07:05.465125 ignition[1165]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:07:05.465157 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 00:07:05.465857 ignition[1165]: PUT result: OK Sep 4 00:07:05.466122 ignition[1165]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 4 00:07:05.466900 ignition[1165]: GET result: OK Sep 4 00:07:05.467053 ignition[1165]: parsing config with SHA512: ae7d9f9baf0288f4a3bf26ad2f87a706b9ee21ac53c0f1fe6e592d46797782256db95bff21bffcc42ca62c84ceda33e9137f452dcd5e692f7e8842bb94304e44 Sep 4 00:07:05.475553 unknown[1165]: fetched base config from "system" Sep 4 00:07:05.475572 unknown[1165]: fetched base config from "system" Sep 4 00:07:05.476130 ignition[1165]: fetch: fetch complete Sep 4 00:07:05.475579 unknown[1165]: fetched user config from "aws" Sep 4 00:07:05.476138 ignition[1165]: fetch: fetch passed Sep 4 00:07:05.476202 ignition[1165]: Ignition finished successfully Sep 4 00:07:05.479687 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 00:07:05.481667 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
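The PUT-then-GET pattern in the fetch stage above is AWS's documented IMDSv2 session flow: a PUT to /latest/api/token obtains a short-lived session token, and every subsequent metadata GET presents it in a header. A minimal sketch of the two requests as data, using the endpoint path and header names from the log and from AWS's documentation (actually issuing them only works from inside an instance):

```python
IMDS = "http://169.254.169.254"

def imdsv2_steps(ttl=21600, path="/2019-10-01/user-data"):
    """The two-step IMDSv2 flow mirrored from the Ignition log above."""
    put = ("PUT", f"{IMDS}/latest/api/token",
           {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    get = ("GET", f"{IMDS}{path}",
           {"X-aws-ec2-metadata-token": "<token from the PUT response>"})
    return [put, get]

for method, url, headers in imdsv2_steps():
    print(method, url)
```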
Sep 4 00:07:05.514077 ignition[1172]: Ignition 2.21.0 Sep 4 00:07:05.514093 ignition[1172]: Stage: kargs Sep 4 00:07:05.514533 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:07:05.514562 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 00:07:05.514704 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 00:07:05.515529 ignition[1172]: PUT result: OK Sep 4 00:07:05.518195 ignition[1172]: kargs: kargs passed Sep 4 00:07:05.518270 ignition[1172]: Ignition finished successfully Sep 4 00:07:05.520565 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 00:07:05.522207 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 00:07:05.546368 ignition[1178]: Ignition 2.21.0 Sep 4 00:07:05.546385 ignition[1178]: Stage: disks Sep 4 00:07:05.546772 ignition[1178]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:07:05.546784 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 00:07:05.546900 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 00:07:05.548389 ignition[1178]: PUT result: OK Sep 4 00:07:05.551513 ignition[1178]: disks: disks passed Sep 4 00:07:05.551590 ignition[1178]: Ignition finished successfully Sep 4 00:07:05.553106 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 00:07:05.554070 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 00:07:05.554741 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 00:07:05.555138 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:07:05.555673 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:07:05.556232 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:07:05.557813 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 4 00:07:05.611507 systemd-fsck[1186]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 4 00:07:05.614694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 00:07:05.617301 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 00:07:05.753057 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none. Sep 4 00:07:05.754528 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 00:07:05.755561 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 00:07:05.757599 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:07:05.761065 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 00:07:05.762945 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 00:07:05.764176 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 00:07:05.764217 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:07:05.776505 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 00:07:05.778863 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 00:07:05.795066 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1205) Sep 4 00:07:05.800059 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:07:05.800146 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:07:05.808973 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:07:05.809077 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:07:05.811510 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
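The systemd-fsck summary earlier on this line is e2fsck's one-line clean report; the two fractions are inodes and blocks in use out of the filesystem's totals. A small parser for that format:

```python
import re

# e2fsck's clean summary for ROOT, copied from the log above.
line = "ROOT: clean, 15/553520 files, 52789/553472 blocks"

m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
used_inodes, total_inodes, used_blocks, total_blocks = map(int, m.groups())

print(f"inodes used: {100 * used_inodes / total_inodes:.2f}%")   # 0.00%
print(f"blocks used: {100 * used_blocks / total_blocks:.2f}%")   # 9.54%
```

The near-empty ROOT is expected here: this is a first boot, before Ignition and the OS have populated /.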
Sep 4 00:07:06.044814 initrd-setup-root[1229]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 00:07:06.049757 initrd-setup-root[1236]: cut: /sysroot/etc/group: No such file or directory Sep 4 00:07:06.065134 initrd-setup-root[1243]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 00:07:06.069527 initrd-setup-root[1250]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 00:07:06.316966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 00:07:06.319458 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 00:07:06.323201 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 00:07:06.336853 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 00:07:06.339306 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:07:06.369557 ignition[1317]: INFO : Ignition 2.21.0 Sep 4 00:07:06.371519 ignition[1317]: INFO : Stage: mount Sep 4 00:07:06.371519 ignition[1317]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:07:06.371519 ignition[1317]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 00:07:06.371519 ignition[1317]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 00:07:06.371519 ignition[1317]: INFO : PUT result: OK Sep 4 00:07:06.376143 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 00:07:06.378464 ignition[1317]: INFO : mount: mount passed Sep 4 00:07:06.378464 ignition[1317]: INFO : Ignition finished successfully Sep 4 00:07:06.380568 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 00:07:06.382126 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 00:07:06.400736 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 00:07:06.437098 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1330) Sep 4 00:07:06.440144 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:07:06.440210 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:07:06.451055 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:07:06.451125 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:07:06.453397 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 00:07:06.485420 ignition[1346]: INFO : Ignition 2.21.0 Sep 4 00:07:06.485420 ignition[1346]: INFO : Stage: files Sep 4 00:07:06.486674 ignition[1346]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:07:06.486674 ignition[1346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 00:07:06.486674 ignition[1346]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 00:07:06.488786 ignition[1346]: INFO : PUT result: OK Sep 4 00:07:06.488491 systemd-networkd[1155]: eth0: Gained IPv6LL Sep 4 00:07:06.492075 ignition[1346]: DEBUG : files: compiled without relabeling support, skipping Sep 4 00:07:06.493273 ignition[1346]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 00:07:06.493273 ignition[1346]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 00:07:06.508496 ignition[1346]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 00:07:06.509320 ignition[1346]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 00:07:06.509320 ignition[1346]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 00:07:06.509093 unknown[1346]: wrote ssh authorized keys file for user: core Sep 4 00:07:06.524736 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 00:07:06.525629 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 4 00:07:06.626485 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 00:07:06.930650 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 00:07:06.930650 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 00:07:06.932468 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 00:07:06.932468 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:07:06.932468 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:07:06.932468 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:07:06.932468 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:07:06.932468 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:07:06.932468 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:07:06.937763 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:07:06.937763 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:07:06.937763 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 00:07:06.940530 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 00:07:06.940530 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 00:07:06.940530 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 4 00:07:18.748003 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 00:07:19.129081 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 00:07:19.129081 ignition[1346]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 00:07:19.131110 ignition[1346]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:07:19.134938 ignition[1346]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:07:19.134938 ignition[1346]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 00:07:19.134938 ignition[1346]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 00:07:19.138549 ignition[1346]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 00:07:19.138549 ignition[1346]: INFO 
: files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:07:19.138549 ignition[1346]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:07:19.138549 ignition[1346]: INFO : files: files passed Sep 4 00:07:19.138549 ignition[1346]: INFO : Ignition finished successfully Sep 4 00:07:19.137413 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 00:07:19.139162 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 00:07:19.143069 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 00:07:19.151355 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 00:07:19.151466 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 00:07:19.164056 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:07:19.165589 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:07:19.166835 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:07:19.168168 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 00:07:19.169294 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 00:07:19.170869 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 00:07:19.220627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 00:07:19.220785 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 00:07:19.222010 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 00:07:19.223247 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Sep 4 00:07:19.224002 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 00:07:19.225199 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 00:07:19.263354 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 00:07:19.266286 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 00:07:19.287411 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:07:19.288022 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:07:19.288940 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 00:07:19.289829 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 00:07:19.290017 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 00:07:19.290966 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 00:07:19.291742 systemd[1]: Stopped target basic.target - Basic System. Sep 4 00:07:19.292500 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 00:07:19.293182 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:07:19.294068 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 00:07:19.294712 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:07:19.295510 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 00:07:19.296198 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 00:07:19.296927 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 00:07:19.298074 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 00:07:19.298866 systemd[1]: Stopped target swap.target - Swaps. 
Sep 4 00:07:19.299706 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 00:07:19.301077 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 00:07:19.302129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:07:19.302649 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 00:07:19.303286 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 00:07:19.304170 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 00:07:19.304669 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 00:07:19.304797 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 00:07:19.306451 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 00:07:19.306659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 00:07:19.307492 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 00:07:19.307696 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 00:07:19.311177 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 00:07:19.314167 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 00:07:19.314685 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 00:07:19.314924 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:07:19.316355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 00:07:19.317258 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:07:19.324786 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 00:07:19.327916 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 00:07:19.349321 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 4 00:07:19.350314 ignition[1401]: INFO : Ignition 2.21.0 Sep 4 00:07:19.350314 ignition[1401]: INFO : Stage: umount Sep 4 00:07:19.352118 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:07:19.352118 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 00:07:19.352118 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 00:07:19.352118 ignition[1401]: INFO : PUT result: OK Sep 4 00:07:19.356889 ignition[1401]: INFO : umount: umount passed Sep 4 00:07:19.358241 ignition[1401]: INFO : Ignition finished successfully Sep 4 00:07:19.358050 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 00:07:19.358704 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 00:07:19.359404 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 00:07:19.359520 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 00:07:19.360953 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 00:07:19.361567 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 00:07:19.362345 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 00:07:19.362408 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 00:07:19.362961 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 00:07:19.363020 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 00:07:19.363627 systemd[1]: Stopped target network.target - Network. Sep 4 00:07:19.364203 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 00:07:19.364268 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:07:19.364834 systemd[1]: Stopped target paths.target - Path Units. Sep 4 00:07:19.365447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 4 00:07:19.369123 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 00:07:19.369481 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 00:07:19.370528 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 00:07:19.371174 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 00:07:19.371233 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 00:07:19.371788 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 00:07:19.371835 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 00:07:19.372404 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 00:07:19.372478 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 00:07:19.373079 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 00:07:19.373136 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 00:07:19.373699 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 00:07:19.373892 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 00:07:19.374654 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 00:07:19.375290 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 00:07:19.379335 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 00:07:19.379493 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 00:07:19.383230 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 4 00:07:19.383592 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 00:07:19.383743 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 00:07:19.386143 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Sep 4 00:07:19.386943 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 4 00:07:19.387968 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 00:07:19.388054 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 00:07:19.389947 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 00:07:19.390490 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 00:07:19.390561 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:07:19.392637 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 00:07:19.392701 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:07:19.394252 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 00:07:19.394312 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 00:07:19.395024 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 00:07:19.395113 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:07:19.397638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:07:19.400295 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 00:07:19.400385 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 4 00:07:19.406982 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 00:07:19.407219 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:07:19.409001 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 00:07:19.409099 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 00:07:19.410559 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 4 00:07:19.410609 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 00:07:19.412158 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 00:07:19.412230 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:07:19.413503 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 00:07:19.413567 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 00:07:19.415949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 00:07:19.416021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 00:07:19.422191 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 00:07:19.422857 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 4 00:07:19.422932 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:07:19.423585 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 00:07:19.423642 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 00:07:19.427635 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 00:07:19.427702 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:07:19.428437 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 00:07:19.428495 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:07:19.429127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:07:19.429183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:07:19.432300 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Sep 4 00:07:19.432381 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 4 00:07:19.432433 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 00:07:19.432487 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 00:07:19.432955 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 00:07:19.433120 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 00:07:19.440934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 00:07:19.441045 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 00:07:19.443687 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 00:07:19.446132 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 00:07:19.462067 systemd[1]: Switching root. Sep 4 00:07:19.505609 systemd-journald[207]: Journal stopped Sep 4 00:07:21.162048 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). 
Sep 4 00:07:21.162136 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 00:07:21.162164 kernel: SELinux: policy capability open_perms=1 Sep 4 00:07:21.163755 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 00:07:21.163790 kernel: SELinux: policy capability always_check_network=0 Sep 4 00:07:21.163810 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 00:07:21.163836 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 00:07:21.163860 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 00:07:21.163878 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 00:07:21.163899 kernel: SELinux: policy capability userspace_initial_context=0 Sep 4 00:07:21.163917 kernel: audit: type=1403 audit(1756944439.852:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 00:07:21.163938 systemd[1]: Successfully loaded SELinux policy in 44.992ms. Sep 4 00:07:21.163971 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.280ms. Sep 4 00:07:21.163994 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 00:07:21.164016 systemd[1]: Detected virtualization amazon. Sep 4 00:07:21.164847 systemd[1]: Detected architecture x86-64. Sep 4 00:07:21.164875 systemd[1]: Detected first boot. Sep 4 00:07:21.164896 systemd[1]: Initializing machine ID from VM UUID. 
Sep 4 00:07:21.164915 kernel: Guest personality initialized and is inactive Sep 4 00:07:21.164934 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 00:07:21.164951 kernel: Initialized host personality Sep 4 00:07:21.164968 kernel: NET: Registered PF_VSOCK protocol family Sep 4 00:07:21.164986 zram_generator::config[1444]: No configuration found. Sep 4 00:07:21.165006 systemd[1]: Populated /etc with preset unit settings. Sep 4 00:07:21.165030 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 00:07:21.165071 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 00:07:21.165092 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 00:07:21.165110 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 00:07:21.165129 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 00:07:21.165148 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 00:07:21.165174 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 00:07:21.165193 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 00:07:21.165212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 00:07:21.165234 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 00:07:21.165254 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 00:07:21.165273 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 00:07:21.165292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 00:07:21.165311 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 4 00:07:21.165330 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 00:07:21.165348 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 00:07:21.165367 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 00:07:21.165390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 00:07:21.165409 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 00:07:21.165428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 00:07:21.165446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:07:21.165465 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 00:07:21.165484 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 00:07:21.165502 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 00:07:21.165521 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 00:07:21.165548 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:07:21.165567 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:07:21.165586 systemd[1]: Reached target slices.target - Slice Units. Sep 4 00:07:21.165605 systemd[1]: Reached target swap.target - Swaps. Sep 4 00:07:21.165623 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 00:07:21.165641 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 00:07:21.165660 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 00:07:21.165679 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 4 00:07:21.165697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 00:07:21.165727 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 00:07:21.165746 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 00:07:21.165766 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 00:07:21.168106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 00:07:21.168131 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 00:07:21.168158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:07:21.168183 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 00:07:21.168205 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 00:07:21.168224 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 00:07:21.168254 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 00:07:21.168275 systemd[1]: Reached target machines.target - Containers. Sep 4 00:07:21.168293 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 00:07:21.168310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:07:21.168330 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 00:07:21.168350 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 00:07:21.168370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:07:21.168391 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 4 00:07:21.168415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:07:21.168435 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 00:07:21.168454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:07:21.168474 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 00:07:21.168494 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 00:07:21.168529 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 00:07:21.168550 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 00:07:21.168570 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 00:07:21.168591 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:07:21.168615 kernel: loop: module loaded Sep 4 00:07:21.168637 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 00:07:21.168658 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 00:07:21.168683 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 00:07:21.168705 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 00:07:21.168730 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 00:07:21.168752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:07:21.168774 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 00:07:21.168795 systemd[1]: Stopped verity-setup.service. 
Sep 4 00:07:21.168818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:07:21.168844 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 00:07:21.168867 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 00:07:21.168889 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 00:07:21.168911 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 00:07:21.168933 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 00:07:21.168955 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 00:07:21.168977 kernel: ACPI: bus type drm_connector registered Sep 4 00:07:21.168998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:07:21.169020 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 00:07:21.184119 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 00:07:21.184154 kernel: fuse: init (API version 7.41) Sep 4 00:07:21.184178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:07:21.184202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:07:21.184225 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 00:07:21.184248 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 00:07:21.184271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:07:21.184294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:07:21.184316 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 00:07:21.184345 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 00:07:21.184368 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 4 00:07:21.184391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:07:21.184456 systemd-journald[1530]: Collecting audit messages is disabled. Sep 4 00:07:21.184496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 00:07:21.184520 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 00:07:21.184542 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 00:07:21.184567 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 00:07:21.184585 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 00:07:21.184603 systemd-journald[1530]: Journal started Sep 4 00:07:21.184642 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec2eb9c13b8b9a56cba165879a6c6a11) is 4.8M, max 38.4M, 33.6M free. Sep 4 00:07:21.188177 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:07:20.772088 systemd[1]: Queued start job for default target multi-user.target. Sep 4 00:07:20.797355 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 4 00:07:20.797952 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 00:07:21.198573 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 00:07:21.203559 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 00:07:21.203621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:07:21.216055 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 00:07:21.216139 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 00:07:21.222109 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 00:07:21.228108 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 00:07:21.234061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:07:21.242062 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 00:07:21.258070 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 00:07:21.261805 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 00:07:21.265053 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 00:07:21.268983 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:07:21.272154 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 00:07:21.276026 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 00:07:21.279837 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 00:07:21.283724 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 00:07:21.314466 kernel: loop0: detected capacity change from 0 to 72352 Sep 4 00:07:21.320778 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 00:07:21.321959 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:07:21.328428 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 00:07:21.332253 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 00:07:21.342571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 00:07:21.377378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:07:21.382970 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec2eb9c13b8b9a56cba165879a6c6a11 is 53.580ms for 1029 entries. Sep 4 00:07:21.382970 systemd-journald[1530]: System Journal (/var/log/journal/ec2eb9c13b8b9a56cba165879a6c6a11) is 8M, max 195.6M, 187.6M free. Sep 4 00:07:21.447242 systemd-journald[1530]: Received client request to flush runtime journal. Sep 4 00:07:21.447306 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 00:07:21.391083 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 00:07:21.415457 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Sep 4 00:07:21.415480 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Sep 4 00:07:21.426673 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:07:21.434058 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 00:07:21.457294 kernel: loop1: detected capacity change from 0 to 146240 Sep 4 00:07:21.449114 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 00:07:21.550197 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 00:07:21.552247 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 00:07:21.568893 kernel: loop2: detected capacity change from 0 to 113872 Sep 4 00:07:21.590639 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Sep 4 00:07:21.591022 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Sep 4 00:07:21.596411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 00:07:21.636070 kernel: loop3: detected capacity change from 0 to 224512 Sep 4 00:07:21.754051 kernel: loop4: detected capacity change from 0 to 72352 Sep 4 00:07:21.782068 kernel: loop5: detected capacity change from 0 to 146240 Sep 4 00:07:21.799552 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 00:07:21.807059 kernel: loop6: detected capacity change from 0 to 113872 Sep 4 00:07:21.828064 kernel: loop7: detected capacity change from 0 to 224512 Sep 4 00:07:21.860121 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 4 00:07:21.860668 (sd-merge)[1605]: Merged extensions into '/usr'. Sep 4 00:07:21.870803 systemd[1]: Reload requested from client PID 1559 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 00:07:21.870821 systemd[1]: Reloading... Sep 4 00:07:21.982062 zram_generator::config[1631]: No configuration found. Sep 4 00:07:22.163345 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:07:22.324107 systemd[1]: Reloading finished in 452 ms. Sep 4 00:07:22.349960 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 00:07:22.356203 systemd[1]: Starting ensure-sysext.service... Sep 4 00:07:22.362224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:07:22.391136 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 00:07:22.396768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:07:22.405022 systemd[1]: Reload requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)... Sep 4 00:07:22.405183 systemd[1]: Reloading... 
Sep 4 00:07:22.422403 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 00:07:22.422449 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 00:07:22.422827 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 00:07:22.423227 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 00:07:22.431019 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 00:07:22.435714 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Sep 4 00:07:22.435810 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Sep 4 00:07:22.450614 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:07:22.450629 systemd-tmpfiles[1683]: Skipping /boot
Sep 4 00:07:22.477182 systemd-udevd[1686]: Using default interface naming scheme 'v255'.
Sep 4 00:07:22.501943 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:07:22.501965 systemd-tmpfiles[1683]: Skipping /boot
Sep 4 00:07:22.534072 zram_generator::config[1715]: No configuration found.
Sep 4 00:07:22.726357 ldconfig[1554]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 00:07:22.806265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:07:22.864182 (udev-worker)[1746]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 00:07:22.938061 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 00:07:23.062059 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 00:07:23.085593 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 00:07:23.085825 systemd[1]: Reloading finished in 679 ms.
Sep 4 00:07:23.087064 kernel: ACPI: button: Power Button [PWRF]
Sep 4 00:07:23.099081 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 4 00:07:23.099402 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep 4 00:07:23.098518 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:07:23.102637 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 00:07:23.104550 kernel: ACPI: button: Sleep Button [SLPF]
Sep 4 00:07:23.104719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:07:23.144059 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:07:23.150438 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 00:07:23.154784 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 00:07:23.161746 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:07:23.168610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:07:23.174608 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 00:07:23.179350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:07:23.180385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:07:23.185449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:07:23.201074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:07:23.218532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:07:23.219414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:07:23.219605 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:07:23.219757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:07:23.229142 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 00:07:23.235272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:07:23.235577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:07:23.235822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:07:23.235962 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:07:23.236118 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:07:23.244692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:07:23.246479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:07:23.249408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:07:23.250275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:07:23.250432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:07:23.250672 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 00:07:23.261521 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:07:23.279589 systemd[1]: Finished ensure-sysext.service.
Sep 4 00:07:23.282334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 00:07:23.321466 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:07:23.321884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:07:23.334029 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:07:23.334335 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:07:23.335402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:07:23.336239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:07:23.337378 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:07:23.337589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:07:23.340763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:07:23.340859 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:07:23.346693 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 00:07:23.355663 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 00:07:23.428362 augenrules[1929]: No rules
Sep 4 00:07:23.431146 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:07:23.431456 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:07:23.432873 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 00:07:23.434424 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 00:07:23.441742 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 00:07:23.443774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 00:07:23.537845 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:07:23.581709 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 00:07:23.585016 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 00:07:23.628773 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 00:07:23.732644 systemd-networkd[1879]: lo: Link UP
Sep 4 00:07:23.732661 systemd-networkd[1879]: lo: Gained carrier
Sep 4 00:07:23.734532 systemd-networkd[1879]: Enumeration completed
Sep 4 00:07:23.734659 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 00:07:23.736483 systemd-networkd[1879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:07:23.736497 systemd-networkd[1879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 00:07:23.742235 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 00:07:23.742534 systemd-networkd[1879]: eth0: Link UP
Sep 4 00:07:23.742784 systemd-networkd[1879]: eth0: Gained carrier
Sep 4 00:07:23.742815 systemd-networkd[1879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:07:23.744365 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 00:07:23.755153 systemd-networkd[1879]: eth0: DHCPv4 address 172.31.29.190/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 00:07:23.761005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:07:23.778756 systemd-resolved[1880]: Positive Trust Anchors:
Sep 4 00:07:23.778780 systemd-resolved[1880]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:07:23.778832 systemd-resolved[1880]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 00:07:23.783248 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 00:07:23.785153 systemd-resolved[1880]: Defaulting to hostname 'linux'.
Sep 4 00:07:23.787837 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:07:23.788435 systemd[1]: Reached target network.target - Network.
Sep 4 00:07:23.788877 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:07:23.789309 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 00:07:23.789987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 00:07:23.790451 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 00:07:23.790842 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 4 00:07:23.791423 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 00:07:23.791901 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 00:07:23.792307 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 00:07:23.792684 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 00:07:23.792731 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:07:23.793133 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:07:23.794785 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 00:07:23.796419 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 00:07:23.799550 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 00:07:23.800112 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 00:07:23.800467 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 00:07:23.802903 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 00:07:23.803760 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 00:07:23.804813 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 00:07:23.806232 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:07:23.806590 systemd[1]: Reached target basic.target - Basic System.
Sep 4 00:07:23.806958 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:07:23.806987 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:07:23.808007 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 00:07:23.811166 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 00:07:23.813217 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 00:07:23.816208 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 00:07:23.819182 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 00:07:23.823223 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 00:07:23.823619 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 00:07:23.826208 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 4 00:07:23.832471 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 00:07:23.840160 systemd[1]: Started ntpd.service - Network Time Service.
Sep 4 00:07:23.846155 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 00:07:23.853140 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 4 00:07:23.856307 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 00:07:23.868281 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 00:07:23.870624 jq[1968]: false
Sep 4 00:07:23.873674 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 00:07:23.875888 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 00:07:23.879400 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 00:07:23.882262 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 00:07:23.890918 google_oslogin_nss_cache[1970]: oslogin_cache_refresh[1970]: Refreshing passwd entry cache
Sep 4 00:07:23.890695 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 00:07:23.890361 oslogin_cache_refresh[1970]: Refreshing passwd entry cache
Sep 4 00:07:23.897104 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 00:07:23.898488 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 00:07:23.898684 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 00:07:23.921110 google_oslogin_nss_cache[1970]: oslogin_cache_refresh[1970]: Failure getting users, quitting
Sep 4 00:07:23.921110 google_oslogin_nss_cache[1970]: oslogin_cache_refresh[1970]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 00:07:23.921110 google_oslogin_nss_cache[1970]: oslogin_cache_refresh[1970]: Refreshing group entry cache
Sep 4 00:07:23.920616 oslogin_cache_refresh[1970]: Failure getting users, quitting
Sep 4 00:07:23.920634 oslogin_cache_refresh[1970]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 00:07:23.920680 oslogin_cache_refresh[1970]: Refreshing group entry cache
Sep 4 00:07:23.922958 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 00:07:23.923214 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 00:07:23.936115 extend-filesystems[1969]: Found /dev/nvme0n1p6
Sep 4 00:07:23.941891 jq[1981]: true
Sep 4 00:07:23.946269 extend-filesystems[1969]: Found /dev/nvme0n1p9
Sep 4 00:07:23.945510 oslogin_cache_refresh[1970]: Failure getting groups, quitting
Sep 4 00:07:23.948874 google_oslogin_nss_cache[1970]: oslogin_cache_refresh[1970]: Failure getting groups, quitting
Sep 4 00:07:23.948874 google_oslogin_nss_cache[1970]: oslogin_cache_refresh[1970]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 00:07:23.948535 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 4 00:07:23.945523 oslogin_cache_refresh[1970]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 00:07:23.950112 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 4 00:07:23.954975 extend-filesystems[1969]: Checking size of /dev/nvme0n1p9
Sep 4 00:07:23.972255 tar[1991]: linux-amd64/LICENSE
Sep 4 00:07:23.972255 tar[1991]: linux-amd64/helm
Sep 4 00:07:23.976944 update_engine[1980]: I20250904 00:07:23.976503 1980 main.cc:92] Flatcar Update Engine starting
Sep 4 00:07:23.997990 extend-filesystems[1969]: Resized partition /dev/nvme0n1p9
Sep 4 00:07:23.998751 jq[2010]: true
Sep 4 00:07:23.999249 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 00:07:24.019476 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 00:07:24.019689 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 00:07:24.034719 dbus-daemon[1966]: [system] SELinux support is enabled
Sep 4 00:07:24.035114 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 00:07:24.040635 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 00:07:24.040684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 00:07:24.041121 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 00:07:24.041145 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 00:07:24.044322 dbus-daemon[1966]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1879 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 4 00:07:24.050754 ntpd[1972]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting
Sep 4 00:07:24.051438 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting
Sep 4 00:07:24.051438 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 00:07:24.051438 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: ----------------------------------------------------
Sep 4 00:07:24.051438 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: ntp-4 is maintained by Network Time Foundation,
Sep 4 00:07:24.051438 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 00:07:24.051438 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: corporation. Support and training for ntp-4 are
Sep 4 00:07:24.051438 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: available at https://www.nwtime.org/support
Sep 4 00:07:24.051438 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: ----------------------------------------------------
Sep 4 00:07:24.050781 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 00:07:24.050788 ntpd[1972]: ----------------------------------------------------
Sep 4 00:07:24.050794 ntpd[1972]: ntp-4 is maintained by Network Time Foundation,
Sep 4 00:07:24.050801 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 00:07:24.050807 ntpd[1972]: corporation. Support and training for ntp-4 are
Sep 4 00:07:24.050814 ntpd[1972]: available at https://www.nwtime.org/support
Sep 4 00:07:24.050820 ntpd[1972]: ----------------------------------------------------
Sep 4 00:07:24.053704 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 00:07:24.056501 extend-filesystems[2026]: resize2fs 1.47.2 (1-Jan-2025)
Sep 4 00:07:24.057953 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: proto: precision = 0.056 usec (-24)
Sep 4 00:07:24.056339 ntpd[1972]: proto: precision = 0.056 usec (-24)
Sep 4 00:07:24.058218 ntpd[1972]: basedate set to 2025-08-22
Sep 4 00:07:24.058461 (ntainerd)[2012]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 00:07:24.059693 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: basedate set to 2025-08-22
Sep 4 00:07:24.059693 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: gps base set to 2025-08-24 (week 2381)
Sep 4 00:07:24.058236 ntpd[1972]: gps base set to 2025-08-24 (week 2381)
Sep 4 00:07:24.061609 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 4 00:07:24.069171 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 00:07:24.069171 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 00:07:24.069244 update_engine[1980]: I20250904 00:07:24.068799 1980 update_check_scheduler.cc:74] Next update check in 7m14s
Sep 4 00:07:24.068973 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 00:07:24.064445 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 00:07:24.069012 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 00:07:24.076266 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 4 00:07:24.073852 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: Listen normally on 3 eth0 172.31.29.190:123
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: Listen normally on 4 lo [::1]:123
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: bind(21) AF_INET6 fe80::4dd:96ff:fea7:a533%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: unable to create socket on eth0 (5) for fe80::4dd:96ff:fea7:a533%2#123
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: failed to init interface for address fe80::4dd:96ff:fea7:a533%2
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: Listening on routing socket on fd #21 for interface updates
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 00:07:24.076401 ntpd[1972]: 4 Sep 00:07:24 ntpd[1972]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 00:07:24.073914 ntpd[1972]: Listen normally on 3 eth0 172.31.29.190:123
Sep 4 00:07:24.073948 ntpd[1972]: Listen normally on 4 lo [::1]:123
Sep 4 00:07:24.073989 ntpd[1972]: bind(21) AF_INET6 fe80::4dd:96ff:fea7:a533%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 00:07:24.074004 ntpd[1972]: unable to create socket on eth0 (5) for fe80::4dd:96ff:fea7:a533%2#123
Sep 4 00:07:24.074015 ntpd[1972]: failed to init interface for address fe80::4dd:96ff:fea7:a533%2
Sep 4 00:07:24.074061 ntpd[1972]: Listening on routing socket on fd #21 for interface updates
Sep 4 00:07:24.075061 ntpd[1972]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 00:07:24.075082 ntpd[1972]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 00:07:24.079575 systemd-logind[1979]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 00:07:24.079597 systemd-logind[1979]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 4 00:07:24.079614 systemd-logind[1979]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 00:07:24.080223 systemd-logind[1979]: New seat seat0.
Sep 4 00:07:24.091308 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 00:07:24.092246 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 00:07:24.100777 coreos-metadata[1965]: Sep 04 00:07:24.100 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 00:07:24.104697 coreos-metadata[1965]: Sep 04 00:07:24.104 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 4 00:07:24.106366 coreos-metadata[1965]: Sep 04 00:07:24.106 INFO Fetch successful
Sep 4 00:07:24.106463 coreos-metadata[1965]: Sep 04 00:07:24.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 4 00:07:24.107913 coreos-metadata[1965]: Sep 04 00:07:24.107 INFO Fetch successful
Sep 4 00:07:24.107972 coreos-metadata[1965]: Sep 04 00:07:24.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 4 00:07:24.113084 coreos-metadata[1965]: Sep 04 00:07:24.112 INFO Fetch successful
Sep 4 00:07:24.113084 coreos-metadata[1965]: Sep 04 00:07:24.112 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 4 00:07:24.115411 coreos-metadata[1965]: Sep 04 00:07:24.115 INFO Fetch successful
Sep 4 00:07:24.115501 coreos-metadata[1965]: Sep 04 00:07:24.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 4 00:07:24.116296 coreos-metadata[1965]: Sep 04 00:07:24.116 INFO Fetch failed with 404: resource not found
Sep 4 00:07:24.116365 coreos-metadata[1965]: Sep 04 00:07:24.116 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 4 00:07:24.120173 coreos-metadata[1965]: Sep 04 00:07:24.119 INFO Fetch successful
Sep 4 00:07:24.120173 coreos-metadata[1965]: Sep 04 00:07:24.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 4 00:07:24.125027 coreos-metadata[1965]: Sep 04 00:07:24.125 INFO Fetch successful
Sep 4 00:07:24.125124 coreos-metadata[1965]: Sep 04 00:07:24.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 4 00:07:24.128704 coreos-metadata[1965]: Sep 04 00:07:24.128 INFO Fetch successful
Sep 4 00:07:24.128704 coreos-metadata[1965]: Sep 04 00:07:24.128 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 4 00:07:24.134319 coreos-metadata[1965]: Sep 04 00:07:24.129 INFO Fetch successful
Sep 4 00:07:24.134319 coreos-metadata[1965]: Sep 04 00:07:24.133 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 4 00:07:24.135998 coreos-metadata[1965]: Sep 04 00:07:24.135 INFO Fetch successful
Sep 4 00:07:24.154951 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 4 00:07:24.199747 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 4 00:07:24.199747 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 00:07:24.199747 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 4 00:07:24.199549 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 00:07:24.211310 extend-filesystems[1969]: Resized filesystem in /dev/nvme0n1p9
Sep 4 00:07:24.200466 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 00:07:24.218167 bash[2049]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 00:07:24.216462 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 00:07:24.222128 systemd[1]: Starting sshkeys.service...
Sep 4 00:07:24.257527 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 00:07:24.258387 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 00:07:24.282603 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 00:07:24.286132 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 00:07:24.334399 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 4 00:07:24.335504 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 4 00:07:24.342137 dbus-daemon[1966]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2028 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 4 00:07:24.350480 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 4 00:07:24.360516 sshd_keygen[2002]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 00:07:24.436423 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 00:07:24.436470 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 00:07:24.442194 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 00:07:24.465838 coreos-metadata[2070]: Sep 04 00:07:24.463 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 00:07:24.475182 coreos-metadata[2070]: Sep 04 00:07:24.475 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 4 00:07:24.475768 coreos-metadata[2070]: Sep 04 00:07:24.475 INFO Fetch successful
Sep 4 00:07:24.475768 coreos-metadata[2070]: Sep 04 00:07:24.475 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 00:07:24.476940 coreos-metadata[2070]: Sep 04 00:07:24.476 INFO Fetch successful
Sep 4 00:07:24.482951 unknown[2070]: wrote ssh authorized keys file for user: core
Sep 4 00:07:24.492404 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 00:07:24.492927 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 00:07:24.506018 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 00:07:24.561949 update-ssh-keys[2156]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 00:07:24.566677 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 00:07:24.581327 systemd[1]: Finished sshkeys.service.
Sep 4 00:07:24.584432 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 00:07:24.619757 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 00:07:24.625839 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 00:07:24.628367 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 00:07:24.678165 containerd[2012]: time="2025-09-04T00:07:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 00:07:24.680758 containerd[2012]: time="2025-09-04T00:07:24.680717194Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.756705236Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.675µs"
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.756747599Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.756770070Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.756952837Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.756972711Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.757005214Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.757602923Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.757631713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.757981792Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.758006501Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.758024398Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758102 containerd[2012]: time="2025-09-04T00:07:24.758054373Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758571 containerd[2012]: time="2025-09-04T00:07:24.758162893Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758571 containerd[2012]: time="2025-09-04T00:07:24.758403536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758571 containerd[2012]: time="2025-09-04T00:07:24.758443808Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:07:24.758571 containerd[2012]: time="2025-09-04T00:07:24.758459817Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 00:07:24.762232 containerd[2012]: time="2025-09-04T00:07:24.759597231Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 00:07:24.762232 containerd[2012]: time="2025-09-04T00:07:24.760751481Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 00:07:24.762232 containerd[2012]: time="2025-09-04T00:07:24.760859605Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 00:07:24.766271 containerd[2012]: time="2025-09-04T00:07:24.766229086Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 00:07:24.766405 containerd[2012]: time="2025-09-04T00:07:24.766381876Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 00:07:24.766470 containerd[2012]: time="2025-09-04T00:07:24.766417557Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 00:07:24.766470 containerd[2012]: time="2025-09-04T00:07:24.766438490Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 00:07:24.766470 containerd[2012]: time="2025-09-04T00:07:24.766457140Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 00:07:24.766572 containerd[2012]: time="2025-09-04T00:07:24.766473531Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 00:07:24.766572 containerd[2012]: time="2025-09-04T00:07:24.766491923Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 00:07:24.766572 containerd[2012]: time="2025-09-04T00:07:24.766509263Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 00:07:24.766572 containerd[2012]: time="2025-09-04T00:07:24.766527155Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 00:07:24.766572 containerd[2012]: time="2025-09-04T00:07:24.766542306Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 00:07:24.766572 containerd[2012]: time="2025-09-04T00:07:24.766558558Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 00:07:24.766753 containerd[2012]: time="2025-09-04T00:07:24.766579461Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 00:07:24.766753 containerd[2012]: time="2025-09-04T00:07:24.766732372Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 00:07:24.766821 containerd[2012]: time="2025-09-04T00:07:24.766757503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 00:07:24.766821 containerd[2012]: time="2025-09-04T00:07:24.766778607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 00:07:24.766821 containerd[2012]: time="2025-09-04T00:07:24.766794085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 00:07:24.766821 containerd[2012]: time="2025-09-04T00:07:24.766810894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 00:07:24.766950 containerd[2012]: time="2025-09-04T00:07:24.766826972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 00:07:24.766950 containerd[2012]: time="2025-09-04T00:07:24.766845174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 00:07:24.766950 containerd[2012]: time="2025-09-04T00:07:24.766862375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 00:07:24.766950 containerd[2012]: time="2025-09-04T00:07:24.766880605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 00:07:24.766950 containerd[2012]: time="2025-09-04T00:07:24.766922201Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 00:07:24.766950 containerd[2012]: time="2025-09-04T00:07:24.766937508Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 00:07:24.769712 containerd[2012]: time="2025-09-04T00:07:24.767023695Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 00:07:24.769844 containerd[2012]: time="2025-09-04T00:07:24.769825965Z" level=info msg="Start snapshots syncer"
Sep 4 00:07:24.769970 containerd[2012]: time="2025-09-04T00:07:24.769952382Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 00:07:24.770499 containerd[2012]: time="2025-09-04T00:07:24.770444140Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 00:07:24.770755 containerd[2012]: time="2025-09-04T00:07:24.770727802Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 00:07:24.770950 containerd[2012]: time="2025-09-04T00:07:24.770931649Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 00:07:24.771222 containerd[2012]: time="2025-09-04T00:07:24.771202395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 00:07:24.771357 containerd[2012]: time="2025-09-04T00:07:24.771339198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 00:07:24.771501 containerd[2012]: time="2025-09-04T00:07:24.771482900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 00:07:24.771595 containerd[2012]: time="2025-09-04T00:07:24.771579663Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 00:07:24.771685 containerd[2012]: time="2025-09-04T00:07:24.771671479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 00:07:24.771771 containerd[2012]: time="2025-09-04T00:07:24.771755526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 00:07:24.771849 containerd[2012]: time="2025-09-04T00:07:24.771835414Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 00:07:24.771850 polkitd[2081]: Started polkitd version 126
Sep 4 00:07:24.772379 containerd[2012]: time="2025-09-04T00:07:24.772108042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 00:07:24.772379 containerd[2012]: time="2025-09-04T00:07:24.772149471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 00:07:24.772379 containerd[2012]: time="2025-09-04T00:07:24.772167331Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 00:07:24.772638 containerd[2012]: time="2025-09-04T00:07:24.772498415Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 00:07:24.772638 containerd[2012]: time="2025-09-04T00:07:24.772527940Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 00:07:24.772638 containerd[2012]: time="2025-09-04T00:07:24.772540511Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 00:07:24.772638 containerd[2012]: time="2025-09-04T00:07:24.772570744Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 00:07:24.772638 containerd[2012]: time="2025-09-04T00:07:24.772582347Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 00:07:24.772638 containerd[2012]: time="2025-09-04T00:07:24.772595026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 00:07:24.772638 containerd[2012]: time="2025-09-04T00:07:24.772612456Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 00:07:24.773030 containerd[2012]: time="2025-09-04T00:07:24.772883398Z" level=info msg="runtime interface created"
Sep 4 00:07:24.773030 containerd[2012]: time="2025-09-04T00:07:24.772899433Z" level=info msg="created NRI interface"
Sep 4 00:07:24.773030 containerd[2012]: time="2025-09-04T00:07:24.772912151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 00:07:24.773030 containerd[2012]: time="2025-09-04T00:07:24.772933641Z" level=info msg="Connect containerd service"
Sep 4 00:07:24.773360 containerd[2012]: time="2025-09-04T00:07:24.773000315Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 00:07:24.776283 containerd[2012]: time="2025-09-04T00:07:24.774686917Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 00:07:24.784396 polkitd[2081]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 00:07:24.784973 polkitd[2081]: Loading rules from directory /run/polkit-1/rules.d
Sep 4 00:07:24.785055 polkitd[2081]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 4 00:07:24.785600 polkitd[2081]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 4 00:07:24.785641 polkitd[2081]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 4 00:07:24.785701 polkitd[2081]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 00:07:24.786440 polkitd[2081]: Finished loading, compiling and executing 2 rules
Sep 4 00:07:24.786833 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 00:07:24.788903 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 00:07:24.789546 polkitd[2081]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 00:07:24.824852 systemd-hostnamed[2028]: Hostname set to (transient)
Sep 4 00:07:24.828796 systemd-resolved[1880]: System hostname changed to 'ip-172-31-29-190'.
Sep 4 00:07:25.042635 containerd[2012]: time="2025-09-04T00:07:25.042538872Z" level=info msg="Start subscribing containerd event"
Sep 4 00:07:25.042902 containerd[2012]: time="2025-09-04T00:07:25.042877245Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 00:07:25.042960 containerd[2012]: time="2025-09-04T00:07:25.042944744Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 00:07:25.043223 containerd[2012]: time="2025-09-04T00:07:25.043129397Z" level=info msg="Start recovering state"
Sep 4 00:07:25.043335 containerd[2012]: time="2025-09-04T00:07:25.043317284Z" level=info msg="Start event monitor"
Sep 4 00:07:25.043378 containerd[2012]: time="2025-09-04T00:07:25.043342257Z" level=info msg="Start cni network conf syncer for default"
Sep 4 00:07:25.043378 containerd[2012]: time="2025-09-04T00:07:25.043352538Z" level=info msg="Start streaming server"
Sep 4 00:07:25.043378 containerd[2012]: time="2025-09-04T00:07:25.043368339Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 4 00:07:25.043486 containerd[2012]: time="2025-09-04T00:07:25.043378923Z" level=info msg="runtime interface starting up..."
Sep 4 00:07:25.043486 containerd[2012]: time="2025-09-04T00:07:25.043387581Z" level=info msg="starting plugins..."
Sep 4 00:07:25.043486 containerd[2012]: time="2025-09-04T00:07:25.043405363Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 00:07:25.043642 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 00:07:25.043978 containerd[2012]: time="2025-09-04T00:07:25.043952641Z" level=info msg="containerd successfully booted in 0.366312s"
Sep 4 00:07:25.051208 ntpd[1972]: bind(24) AF_INET6 fe80::4dd:96ff:fea7:a533%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 00:07:25.052324 ntpd[1972]: 4 Sep 00:07:25 ntpd[1972]: bind(24) AF_INET6 fe80::4dd:96ff:fea7:a533%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 00:07:25.052324 ntpd[1972]: 4 Sep 00:07:25 ntpd[1972]: unable to create socket on eth0 (6) for fe80::4dd:96ff:fea7:a533%2#123
Sep 4 00:07:25.052324 ntpd[1972]: 4 Sep 00:07:25 ntpd[1972]: failed to init interface for address fe80::4dd:96ff:fea7:a533%2
Sep 4 00:07:25.051249 ntpd[1972]: unable to create socket on eth0 (6) for fe80::4dd:96ff:fea7:a533%2#123
Sep 4 00:07:25.051265 ntpd[1972]: failed to init interface for address fe80::4dd:96ff:fea7:a533%2
Sep 4 00:07:25.067338 tar[1991]: linux-amd64/README.md
Sep 4 00:07:25.085951 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 00:07:25.432258 systemd-networkd[1879]: eth0: Gained IPv6LL
Sep 4 00:07:25.434906 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 00:07:25.435878 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 00:07:25.437824 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 4 00:07:25.442199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:07:25.447292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 00:07:25.493824 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 00:07:25.527704 amazon-ssm-agent[2209]: Initializing new seelog logger
Sep 4 00:07:25.528177 amazon-ssm-agent[2209]: New Seelog Logger Creation Complete
Sep 4 00:07:25.528177 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.528177 amazon-ssm-agent[2209]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.528424 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 processing appconfig overrides
Sep 4 00:07:25.528950 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.528950 amazon-ssm-agent[2209]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.528950 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 processing appconfig overrides
Sep 4 00:07:25.529189 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.529189 amazon-ssm-agent[2209]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.529274 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 processing appconfig overrides
Sep 4 00:07:25.529513 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5286 INFO Proxy environment variables:
Sep 4 00:07:25.531546 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.531546 amazon-ssm-agent[2209]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.531650 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 processing appconfig overrides
Sep 4 00:07:25.629336 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5286 INFO https_proxy:
Sep 4 00:07:25.728714 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5286 INFO http_proxy:
Sep 4 00:07:25.827490 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5287 INFO no_proxy:
Sep 4 00:07:25.858023 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.858023 amazon-ssm-agent[2209]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 00:07:25.858023 amazon-ssm-agent[2209]: 2025/09/04 00:07:25 processing appconfig overrides
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5288 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5290 INFO Checking if agent identity type EC2 can be assumed
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5684 INFO Agent will take identity from EC2
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5707 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5707 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5707 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5707 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5707 INFO [Registrar] Starting registrar module
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5721 INFO [EC2Identity] Checking disk for registration info
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5721 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.5721 INFO [EC2Identity] Generating registration keypair
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8118 INFO [EC2Identity] Checking write access before registering
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8123 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8571 INFO [EC2Identity] EC2 registration was successful.
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8571 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8572 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8572 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8831 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 00:07:25.883577 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8833 INFO [CredentialRefresher] Credentials ready
Sep 4 00:07:25.925965 amazon-ssm-agent[2209]: 2025-09-04 00:07:25.8835 INFO [CredentialRefresher] Next credential rotation will be in 29.999994204016666 minutes
Sep 4 00:07:26.430129 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 00:07:26.432004 systemd[1]: Started sshd@0-172.31.29.190:22-139.178.68.195:49330.service - OpenSSH per-connection server daemon (139.178.68.195:49330).
Sep 4 00:07:26.649880 sshd[2230]: Accepted publickey for core from 139.178.68.195 port 49330 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:07:26.651835 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:26.659000 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 00:07:26.661274 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 00:07:26.675399 systemd-logind[1979]: New session 1 of user core.
Sep 4 00:07:26.688025 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 00:07:26.693349 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 00:07:26.708253 (systemd)[2234]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 00:07:26.711272 systemd-logind[1979]: New session c1 of user core.
Sep 4 00:07:26.859975 systemd[2234]: Queued start job for default target default.target.
Sep 4 00:07:26.874232 systemd[2234]: Created slice app.slice - User Application Slice.
Sep 4 00:07:26.874266 systemd[2234]: Reached target paths.target - Paths.
Sep 4 00:07:26.874311 systemd[2234]: Reached target timers.target - Timers.
Sep 4 00:07:26.875743 systemd[2234]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 00:07:26.889143 systemd[2234]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 00:07:26.889260 systemd[2234]: Reached target sockets.target - Sockets.
Sep 4 00:07:26.889893 systemd[2234]: Reached target basic.target - Basic System.
Sep 4 00:07:26.889980 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 00:07:26.890466 systemd[2234]: Reached target default.target - Main User Target.
Sep 4 00:07:26.890516 systemd[2234]: Startup finished in 170ms.
Sep 4 00:07:26.897457 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 00:07:26.899335 amazon-ssm-agent[2209]: 2025-09-04 00:07:26.8991 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 00:07:26.999954 amazon-ssm-agent[2209]: 2025-09-04 00:07:26.9014 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2244) started
Sep 4 00:07:27.060084 systemd[1]: Started sshd@1-172.31.29.190:22-139.178.68.195:49334.service - OpenSSH per-connection server daemon (139.178.68.195:49334).
Sep 4 00:07:27.101480 amazon-ssm-agent[2209]: 2025-09-04 00:07:26.9014 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 00:07:27.253783 sshd[2254]: Accepted publickey for core from 139.178.68.195 port 49334 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:07:27.255845 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:27.263012 systemd-logind[1979]: New session 2 of user core.
Sep 4 00:07:27.271291 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 00:07:27.327068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:07:27.327910 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 00:07:27.329625 systemd[1]: Startup finished in 2.768s (kernel) + 19.190s (initrd) + 7.520s (userspace) = 29.479s.
Sep 4 00:07:27.339810 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:07:27.393514 sshd[2261]: Connection closed by 139.178.68.195 port 49334
Sep 4 00:07:27.394109 sshd-session[2254]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:27.398369 systemd[1]: sshd@1-172.31.29.190:22-139.178.68.195:49334.service: Deactivated successfully.
Sep 4 00:07:27.400268 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 00:07:27.401338 systemd-logind[1979]: Session 2 logged out. Waiting for processes to exit.
Sep 4 00:07:27.402821 systemd-logind[1979]: Removed session 2.
Sep 4 00:07:27.425972 systemd[1]: Started sshd@2-172.31.29.190:22-139.178.68.195:49344.service - OpenSSH per-connection server daemon (139.178.68.195:49344).
Sep 4 00:07:27.593570 sshd[2277]: Accepted publickey for core from 139.178.68.195 port 49344 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:07:27.594853 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:27.600091 systemd-logind[1979]: New session 3 of user core.
Sep 4 00:07:27.607261 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 00:07:27.720275 sshd[2283]: Connection closed by 139.178.68.195 port 49344
Sep 4 00:07:27.720631 sshd-session[2277]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:27.724144 systemd[1]: sshd@2-172.31.29.190:22-139.178.68.195:49344.service: Deactivated successfully.
Sep 4 00:07:27.725968 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 00:07:27.726786 systemd-logind[1979]: Session 3 logged out. Waiting for processes to exit.
Sep 4 00:07:27.728509 systemd-logind[1979]: Removed session 3.
Sep 4 00:07:27.760894 systemd[1]: Started sshd@3-172.31.29.190:22-139.178.68.195:49358.service - OpenSSH per-connection server daemon (139.178.68.195:49358).
Sep 4 00:07:27.929717 sshd[2289]: Accepted publickey for core from 139.178.68.195 port 49358 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:07:27.930649 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:27.936262 systemd-logind[1979]: New session 4 of user core.
Sep 4 00:07:27.940202 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 00:07:28.051210 ntpd[1972]: Listen normally on 7 eth0 [fe80::4dd:96ff:fea7:a533%2]:123
Sep 4 00:07:28.051568 ntpd[1972]: 4 Sep 00:07:28 ntpd[1972]: Listen normally on 7 eth0 [fe80::4dd:96ff:fea7:a533%2]:123
Sep 4 00:07:28.062714 sshd[2291]: Connection closed by 139.178.68.195 port 49358
Sep 4 00:07:28.063271 sshd-session[2289]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:28.067564 systemd-logind[1979]: Session 4 logged out. Waiting for processes to exit.
Sep 4 00:07:28.067733 systemd[1]: sshd@3-172.31.29.190:22-139.178.68.195:49358.service: Deactivated successfully.
Sep 4 00:07:28.070550 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 00:07:28.074210 systemd-logind[1979]: Removed session 4.
Sep 4 00:07:28.094868 systemd[1]: Started sshd@4-172.31.29.190:22-139.178.68.195:49360.service - OpenSSH per-connection server daemon (139.178.68.195:49360).
Sep 4 00:07:28.267238 sshd[2297]: Accepted publickey for core from 139.178.68.195 port 49360 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:07:28.270545 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:28.276227 systemd-logind[1979]: New session 5 of user core.
Sep 4 00:07:28.285293 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 00:07:28.381591 kubelet[2267]: E0904 00:07:28.381523 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:07:28.383886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:07:28.384046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:07:28.384404 systemd[1]: kubelet.service: Consumed 1.009s CPU time, 266.1M memory peak.
Sep 4 00:07:28.416309 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 00:07:28.416589 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:07:28.429780 sudo[2301]: pam_unix(sudo:session): session closed for user root
Sep 4 00:07:28.452297 sshd[2300]: Connection closed by 139.178.68.195 port 49360
Sep 4 00:07:28.453061 sshd-session[2297]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:28.456441 systemd[1]: sshd@4-172.31.29.190:22-139.178.68.195:49360.service: Deactivated successfully.
Sep 4 00:07:28.458350 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 00:07:28.460465 systemd-logind[1979]: Session 5 logged out. Waiting for processes to exit.
Sep 4 00:07:28.461626 systemd-logind[1979]: Removed session 5.
Sep 4 00:07:28.492139 systemd[1]: Started sshd@5-172.31.29.190:22-139.178.68.195:49362.service - OpenSSH per-connection server daemon (139.178.68.195:49362).
Sep 4 00:07:28.667365 sshd[2308]: Accepted publickey for core from 139.178.68.195 port 49362 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:07:28.669328 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:28.675522 systemd-logind[1979]: New session 6 of user core.
Sep 4 00:07:28.683308 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 00:07:28.781738 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 00:07:28.782142 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:07:28.789557 sudo[2312]: pam_unix(sudo:session): session closed for user root
Sep 4 00:07:28.795467 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 00:07:28.795836 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:07:28.806651 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:07:28.847949 augenrules[2334]: No rules
Sep 4 00:07:28.849550 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:07:28.849876 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:07:28.850998 sudo[2311]: pam_unix(sudo:session): session closed for user root
Sep 4 00:07:28.874474 sshd[2310]: Connection closed by 139.178.68.195 port 49362
Sep 4 00:07:28.874984 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:28.879385 systemd[1]: sshd@5-172.31.29.190:22-139.178.68.195:49362.service: Deactivated successfully.
Sep 4 00:07:28.881377 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 00:07:28.882841 systemd-logind[1979]: Session 6 logged out. Waiting for processes to exit.
Sep 4 00:07:28.884425 systemd-logind[1979]: Removed session 6.
Sep 4 00:07:28.920944 systemd[1]: Started sshd@6-172.31.29.190:22-139.178.68.195:49372.service - OpenSSH per-connection server daemon (139.178.68.195:49372).
Sep 4 00:07:29.095132 sshd[2343]: Accepted publickey for core from 139.178.68.195 port 49372 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:07:29.096568 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:29.101745 systemd-logind[1979]: New session 7 of user core.
Sep 4 00:07:29.108245 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 00:07:29.207913 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 00:07:29.208201 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:07:29.843139 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 00:07:29.857532 (dockerd)[2363]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 00:07:30.269711 dockerd[2363]: time="2025-09-04T00:07:30.269310695Z" level=info msg="Starting up"
Sep 4 00:07:30.271019 dockerd[2363]: time="2025-09-04T00:07:30.270979924Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 4 00:07:30.571240 dockerd[2363]: time="2025-09-04T00:07:30.570916858Z" level=info msg="Loading containers: start."
Sep 4 00:07:30.586105 kernel: Initializing XFRM netlink socket
Sep 4 00:07:30.821317 (udev-worker)[2386]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 00:07:30.864416 systemd-networkd[1879]: docker0: Link UP
Sep 4 00:07:30.870365 dockerd[2363]: time="2025-09-04T00:07:30.869764273Z" level=info msg="Loading containers: done."
Sep 4 00:07:30.886460 dockerd[2363]: time="2025-09-04T00:07:30.886408105Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 00:07:30.886642 dockerd[2363]: time="2025-09-04T00:07:30.886516173Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 4 00:07:30.886691 dockerd[2363]: time="2025-09-04T00:07:30.886652236Z" level=info msg="Initializing buildkit"
Sep 4 00:07:30.911472 dockerd[2363]: time="2025-09-04T00:07:30.911232899Z" level=info msg="Completed buildkit initialization"
Sep 4 00:07:30.918953 dockerd[2363]: time="2025-09-04T00:07:30.918889021Z" level=info msg="Daemon has completed initialization"
Sep 4 00:07:30.919336 dockerd[2363]: time="2025-09-04T00:07:30.919294925Z" level=info msg="API listen on /run/docker.sock"
Sep 4 00:07:30.921831 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 00:07:31.486855 systemd-resolved[1880]: Clock change detected. Flushing caches.
Sep 4 00:07:32.524516 containerd[2012]: time="2025-09-04T00:07:32.524476607Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 4 00:07:33.058392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2238412581.mount: Deactivated successfully.
Sep 4 00:07:34.481802 containerd[2012]: time="2025-09-04T00:07:34.481747627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:34.483118 containerd[2012]: time="2025-09-04T00:07:34.482897333Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687"
Sep 4 00:07:34.484643 containerd[2012]: time="2025-09-04T00:07:34.484611129Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:34.487532 containerd[2012]: time="2025-09-04T00:07:34.487497635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:34.488385 containerd[2012]: time="2025-09-04T00:07:34.488352582Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 1.963837228s"
Sep 4 00:07:34.488477 containerd[2012]: time="2025-09-04T00:07:34.488464355Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 4 00:07:34.489212 containerd[2012]: time="2025-09-04T00:07:34.489188999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 4 00:07:36.097717 containerd[2012]: time="2025-09-04T00:07:36.097645273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:36.100442 containerd[2012]: time="2025-09-04T00:07:36.100368979Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128"
Sep 4 00:07:36.103728 containerd[2012]: time="2025-09-04T00:07:36.103593627Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:36.108878 containerd[2012]: time="2025-09-04T00:07:36.108797965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:36.109749 containerd[2012]: time="2025-09-04T00:07:36.109708674Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.620476993s"
Sep 4 00:07:36.109749 containerd[2012]: time="2025-09-04T00:07:36.109740274Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 4 00:07:36.110466 containerd[2012]: time="2025-09-04T00:07:36.110444048Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 4 00:07:37.525301 containerd[2012]: time="2025-09-04T00:07:37.525220283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:37.526881 containerd[2012]: time="2025-09-04T00:07:37.526682937Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036"
Sep 4 00:07:37.528601 containerd[2012]: time="2025-09-04T00:07:37.528574307Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:37.534550 containerd[2012]: time="2025-09-04T00:07:37.532421198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:37.534550 containerd[2012]: time="2025-09-04T00:07:37.534008389Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.423540141s"
Sep 4 00:07:37.534550 containerd[2012]: time="2025-09-04T00:07:37.534043416Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 4 00:07:37.534852 containerd[2012]: time="2025-09-04T00:07:37.534768957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 4 00:07:38.517327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241672168.mount: Deactivated successfully.
Sep 4 00:07:39.058181 containerd[2012]: time="2025-09-04T00:07:39.058126881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:39.059336 containerd[2012]: time="2025-09-04T00:07:39.059182499Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170"
Sep 4 00:07:39.060694 containerd[2012]: time="2025-09-04T00:07:39.060660048Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:39.063594 containerd[2012]: time="2025-09-04T00:07:39.063531485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:39.064252 containerd[2012]: time="2025-09-04T00:07:39.064056097Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.529262475s"
Sep 4 00:07:39.064252 containerd[2012]: time="2025-09-04T00:07:39.064082590Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 4 00:07:39.064580 containerd[2012]: time="2025-09-04T00:07:39.064563513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 00:07:39.069763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:07:39.071772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:07:39.344516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:07:39.360873 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:07:39.404438 kubelet[2646]: E0904 00:07:39.404356 2646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:07:39.408605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:07:39.409081 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:07:39.409492 systemd[1]: kubelet.service: Consumed 171ms CPU time, 108.3M memory peak.
Sep 4 00:07:39.598439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993124900.mount: Deactivated successfully.
Sep 4 00:07:40.610086 containerd[2012]: time="2025-09-04T00:07:40.610035394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:40.612354 containerd[2012]: time="2025-09-04T00:07:40.612156681Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 4 00:07:40.614470 containerd[2012]: time="2025-09-04T00:07:40.614436997Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:40.647898 containerd[2012]: time="2025-09-04T00:07:40.647820169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:40.649141 containerd[2012]: time="2025-09-04T00:07:40.648672937Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.58402037s"
Sep 4 00:07:40.649141 containerd[2012]: time="2025-09-04T00:07:40.648708191Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 00:07:40.649320 containerd[2012]: time="2025-09-04T00:07:40.649297537Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 00:07:41.128037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587491050.mount: Deactivated successfully.
Sep 4 00:07:41.134482 containerd[2012]: time="2025-09-04T00:07:41.134440039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:07:41.135379 containerd[2012]: time="2025-09-04T00:07:41.135225685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 00:07:41.136540 containerd[2012]: time="2025-09-04T00:07:41.136510153Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:07:41.139105 containerd[2012]: time="2025-09-04T00:07:41.138498346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:07:41.139105 containerd[2012]: time="2025-09-04T00:07:41.139000854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 489.68221ms"
Sep 4 00:07:41.139105 containerd[2012]: time="2025-09-04T00:07:41.139023627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 00:07:41.139558 containerd[2012]: time="2025-09-04T00:07:41.139539973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 4 00:07:41.613449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107400098.mount: Deactivated successfully.
Sep 4 00:07:43.876443 containerd[2012]: time="2025-09-04T00:07:43.876367656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:43.878259 containerd[2012]: time="2025-09-04T00:07:43.878174392Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 4 00:07:43.879992 containerd[2012]: time="2025-09-04T00:07:43.879929148Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:43.884571 containerd[2012]: time="2025-09-04T00:07:43.883573642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:43.884571 containerd[2012]: time="2025-09-04T00:07:43.884388489Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.744827418s"
Sep 4 00:07:43.884571 containerd[2012]: time="2025-09-04T00:07:43.884416708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 4 00:07:46.811387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:07:46.811644 systemd[1]: kubelet.service: Consumed 171ms CPU time, 108.3M memory peak.
Sep 4 00:07:46.814611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:07:46.850129 systemd[1]: Reload requested from client PID 2792 ('systemctl') (unit session-7.scope)...
Sep 4 00:07:46.850148 systemd[1]: Reloading...
Sep 4 00:07:46.956261 zram_generator::config[2833]: No configuration found.
Sep 4 00:07:47.083793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:07:47.233798 systemd[1]: Reloading finished in 383 ms.
Sep 4 00:07:47.277928 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 00:07:47.278040 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 00:07:47.278462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:07:47.278526 systemd[1]: kubelet.service: Consumed 103ms CPU time, 85.9M memory peak.
Sep 4 00:07:47.281075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:07:47.876132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:07:47.885657 (kubelet)[2897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 00:07:47.944024 kubelet[2897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:07:47.944024 kubelet[2897]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 00:07:47.944024 kubelet[2897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:07:47.944514 kubelet[2897]: I0904 00:07:47.944107 2897 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 00:07:48.184895 kubelet[2897]: I0904 00:07:48.184558 2897 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 00:07:48.184895 kubelet[2897]: I0904 00:07:48.184589 2897 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 00:07:48.185087 kubelet[2897]: I0904 00:07:48.185068 2897 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 00:07:48.229104 kubelet[2897]: E0904 00:07:48.228776 2897 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:07:48.231661 kubelet[2897]: I0904 00:07:48.230967 2897 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 00:07:48.254538 kubelet[2897]: I0904 00:07:48.254515 2897 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 00:07:48.263272 kubelet[2897]: I0904 00:07:48.263211 2897 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 00:07:48.267051 kubelet[2897]: I0904 00:07:48.267004 2897 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 00:07:48.267230 kubelet[2897]: I0904 00:07:48.267049 2897 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-190","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 00:07:48.269175 kubelet[2897]: I0904 00:07:48.269127 2897 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 00:07:48.269175 kubelet[2897]: I0904 00:07:48.269158 2897 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 00:07:48.270600 kubelet[2897]: I0904 00:07:48.270567 2897 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:07:48.277175 kubelet[2897]: I0904 00:07:48.277141 2897 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 00:07:48.277175 kubelet[2897]: I0904 00:07:48.277182 2897 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 00:07:48.277519 kubelet[2897]: I0904 00:07:48.277207 2897 kubelet.go:352] "Adding apiserver pod source"
Sep 4 00:07:48.277519 kubelet[2897]: I0904 00:07:48.277218 2897 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 00:07:48.287314 kubelet[2897]: W0904 00:07:48.287264 2897 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.190:6443: connect: connection refused
Sep 4 00:07:48.287431 kubelet[2897]: E0904 00:07:48.287322 2897 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:07:48.287431 kubelet[2897]: W0904 00:07:48.287403 2897 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-190&limit=500&resourceVersion=0": dial tcp 172.31.29.190:6443: connect: connection refused
Sep 4 00:07:48.287493 kubelet[2897]: E0904 00:07:48.287428 2897 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-190&limit=500&resourceVersion=0\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:07:48.289174 kubelet[2897]: I0904 00:07:48.289127 2897 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 4 00:07:48.292453 kubelet[2897]: I0904 00:07:48.292428 2897 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 00:07:48.292556 kubelet[2897]: W0904 00:07:48.292492 2897 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 00:07:48.293134 kubelet[2897]: I0904 00:07:48.293108 2897 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 00:07:48.293206 kubelet[2897]: I0904 00:07:48.293143 2897 server.go:1287] "Started kubelet"
Sep 4 00:07:48.293386 kubelet[2897]: I0904 00:07:48.293351 2897 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 00:07:48.296025 kubelet[2897]: I0904 00:07:48.295087 2897 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 00:07:48.298836 kubelet[2897]: I0904 00:07:48.298794 2897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 00:07:48.299474 kubelet[2897]: I0904 00:07:48.299332 2897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 00:07:48.299549 kubelet[2897]: I0904 00:07:48.299534 2897 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 00:07:48.307024 kubelet[2897]: E0904 00:07:48.300987 2897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.190:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.190:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-190.1861ebaf6ab30800 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-190,UID:ip-172-31-29-190,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-190,},FirstTimestamp:2025-09-04 00:07:48.293126144 +0000 UTC m=+0.403486835,LastTimestamp:2025-09-04 00:07:48.293126144 +0000 UTC m=+0.403486835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-190,}"
Sep 4 00:07:48.309294 kubelet[2897]: I0904 00:07:48.309202 2897 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 00:07:48.310255 kubelet[2897]: I0904 00:07:48.309832 2897 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 00:07:48.310819 kubelet[2897]: E0904 00:07:48.310795 2897 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-190\" not found"
Sep 4 00:07:48.314774 kubelet[2897]: I0904 00:07:48.314750 2897 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 00:07:48.314870 kubelet[2897]: I0904 00:07:48.314815 2897 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 00:07:48.317803 kubelet[2897]: E0904 00:07:48.317767 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-190?timeout=10s\": dial tcp 172.31.29.190:6443: connect: connection refused" interval="200ms"
Sep 4 00:07:48.318817 kubelet[2897]: W0904 00:07:48.318768 2897 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.190:6443: connect: connection refused
Sep 4 00:07:48.319095 kubelet[2897]: E0904 00:07:48.319073 2897 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:07:48.328338 kubelet[2897]: I0904 00:07:48.327492 2897 factory.go:221] Registration of the systemd container factory successfully
Sep 4 00:07:48.328338 kubelet[2897]: I0904 00:07:48.327615 2897 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 00:07:48.329569 kubelet[2897]: I0904 00:07:48.329528 2897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 00:07:48.331160 kubelet[2897]: I0904 00:07:48.331135 2897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 00:07:48.331318 kubelet[2897]: I0904 00:07:48.331306 2897 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 00:07:48.331419 kubelet[2897]: I0904 00:07:48.331408 2897 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 00:07:48.331486 kubelet[2897]: I0904 00:07:48.331479 2897 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 00:07:48.333072 kubelet[2897]: E0904 00:07:48.331616 2897 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:07:48.339311 kubelet[2897]: W0904 00:07:48.339007 2897 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.190:6443: connect: connection refused Sep 4 00:07:48.339500 kubelet[2897]: E0904 00:07:48.339479 2897 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:07:48.339694 kubelet[2897]: E0904 00:07:48.339672 2897 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:07:48.339694 kubelet[2897]: I0904 00:07:48.339678 2897 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:07:48.361627 kubelet[2897]: I0904 00:07:48.361581 2897 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 00:07:48.361627 kubelet[2897]: I0904 00:07:48.361599 2897 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 00:07:48.361859 kubelet[2897]: I0904 00:07:48.361794 2897 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:07:48.368265 kubelet[2897]: I0904 00:07:48.368026 2897 policy_none.go:49] "None policy: Start" Sep 4 00:07:48.368265 kubelet[2897]: I0904 00:07:48.368054 2897 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 00:07:48.368265 kubelet[2897]: I0904 00:07:48.368066 2897 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:07:48.377032 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 00:07:48.389011 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 00:07:48.392960 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 00:07:48.408274 kubelet[2897]: I0904 00:07:48.408231 2897 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:07:48.408548 kubelet[2897]: I0904 00:07:48.408537 2897 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:07:48.408753 kubelet[2897]: I0904 00:07:48.408612 2897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:07:48.410043 kubelet[2897]: I0904 00:07:48.409313 2897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:07:48.411100 kubelet[2897]: E0904 00:07:48.411069 2897 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 00:07:48.411468 kubelet[2897]: E0904 00:07:48.411353 2897 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-190\" not found" Sep 4 00:07:48.441907 systemd[1]: Created slice kubepods-burstable-pod213f0f4342fa5592fb12d81e87a5977c.slice - libcontainer container kubepods-burstable-pod213f0f4342fa5592fb12d81e87a5977c.slice. Sep 4 00:07:48.453415 kubelet[2897]: E0904 00:07:48.453377 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:48.457592 systemd[1]: Created slice kubepods-burstable-podd6e05e77bc77a8729d0324932d21a0da.slice - libcontainer container kubepods-burstable-podd6e05e77bc77a8729d0324932d21a0da.slice. 
Sep 4 00:07:48.466046 kubelet[2897]: E0904 00:07:48.465869 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:48.468950 systemd[1]: Created slice kubepods-burstable-pod6f873f151ea64f087302fd1472009a89.slice - libcontainer container kubepods-burstable-pod6f873f151ea64f087302fd1472009a89.slice. Sep 4 00:07:48.471673 kubelet[2897]: E0904 00:07:48.471644 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:48.511173 kubelet[2897]: I0904 00:07:48.511145 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-190" Sep 4 00:07:48.511597 kubelet[2897]: E0904 00:07:48.511537 2897 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.190:6443/api/v1/nodes\": dial tcp 172.31.29.190:6443: connect: connection refused" node="ip-172-31-29-190" Sep 4 00:07:48.519262 kubelet[2897]: E0904 00:07:48.519206 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-190?timeout=10s\": dial tcp 172.31.29.190:6443: connect: connection refused" interval="400ms" Sep 4 00:07:48.616877 kubelet[2897]: I0904 00:07:48.616825 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:48.617151 kubelet[2897]: I0904 00:07:48.616886 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:48.617151 kubelet[2897]: I0904 00:07:48.616908 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/213f0f4342fa5592fb12d81e87a5977c-ca-certs\") pod \"kube-apiserver-ip-172-31-29-190\" (UID: \"213f0f4342fa5592fb12d81e87a5977c\") " pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:48.617151 kubelet[2897]: I0904 00:07:48.616923 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/213f0f4342fa5592fb12d81e87a5977c-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-190\" (UID: \"213f0f4342fa5592fb12d81e87a5977c\") " pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:48.617151 kubelet[2897]: I0904 00:07:48.616940 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/213f0f4342fa5592fb12d81e87a5977c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-190\" (UID: \"213f0f4342fa5592fb12d81e87a5977c\") " pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:48.617151 kubelet[2897]: I0904 00:07:48.616956 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:48.617318 kubelet[2897]: I0904 00:07:48.616970 2897 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:48.617318 kubelet[2897]: I0904 00:07:48.616987 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:48.617318 kubelet[2897]: I0904 00:07:48.617024 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f873f151ea64f087302fd1472009a89-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-190\" (UID: \"6f873f151ea64f087302fd1472009a89\") " pod="kube-system/kube-scheduler-ip-172-31-29-190" Sep 4 00:07:48.713882 kubelet[2897]: I0904 00:07:48.713777 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-190" Sep 4 00:07:48.714826 kubelet[2897]: E0904 00:07:48.714783 2897 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.190:6443/api/v1/nodes\": dial tcp 172.31.29.190:6443: connect: connection refused" node="ip-172-31-29-190" Sep 4 00:07:48.755260 containerd[2012]: time="2025-09-04T00:07:48.755206887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-190,Uid:213f0f4342fa5592fb12d81e87a5977c,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:48.774693 containerd[2012]: time="2025-09-04T00:07:48.774517931Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-190,Uid:6f873f151ea64f087302fd1472009a89,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:48.774693 containerd[2012]: time="2025-09-04T00:07:48.774649417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-190,Uid:d6e05e77bc77a8729d0324932d21a0da,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:48.922285 kubelet[2897]: E0904 00:07:48.920222 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-190?timeout=10s\": dial tcp 172.31.29.190:6443: connect: connection refused" interval="800ms" Sep 4 00:07:48.932209 containerd[2012]: time="2025-09-04T00:07:48.931447897Z" level=info msg="connecting to shim db739e3ffc33f4d0d00c55ba24685d8b35b7287ce35d82bd3a7ba116b424d62f" address="unix:///run/containerd/s/1a44b04b88ed8d6fc3955d1b2715f96a415be2a6a54f8d6b0d755a68cfb72e7d" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:48.962018 containerd[2012]: time="2025-09-04T00:07:48.961961010Z" level=info msg="connecting to shim 6296b6ea7dc53d87c8cff92ee8b7b1a72445b47bf350e2ecbafcd2095f4101d5" address="unix:///run/containerd/s/4f202568cf365149ab74a74c1b2b88fe2176236f7bd457d9bfd7c47966d77899" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:48.975353 containerd[2012]: time="2025-09-04T00:07:48.974782472Z" level=info msg="connecting to shim 2965fcbba414b3e562c03253892b9938c88eb5cf656101ad54b46ffd6974ab5f" address="unix:///run/containerd/s/ff8529acb6ba0da36a398432786a8a905488addc4683b8bc9ae8060ca98299f7" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:49.063420 systemd[1]: Started cri-containerd-2965fcbba414b3e562c03253892b9938c88eb5cf656101ad54b46ffd6974ab5f.scope - libcontainer container 2965fcbba414b3e562c03253892b9938c88eb5cf656101ad54b46ffd6974ab5f. 
Sep 4 00:07:49.065780 systemd[1]: Started cri-containerd-6296b6ea7dc53d87c8cff92ee8b7b1a72445b47bf350e2ecbafcd2095f4101d5.scope - libcontainer container 6296b6ea7dc53d87c8cff92ee8b7b1a72445b47bf350e2ecbafcd2095f4101d5. Sep 4 00:07:49.067605 systemd[1]: Started cri-containerd-db739e3ffc33f4d0d00c55ba24685d8b35b7287ce35d82bd3a7ba116b424d62f.scope - libcontainer container db739e3ffc33f4d0d00c55ba24685d8b35b7287ce35d82bd3a7ba116b424d62f. Sep 4 00:07:49.113814 kubelet[2897]: W0904 00:07:49.113710 2897 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-190&limit=500&resourceVersion=0": dial tcp 172.31.29.190:6443: connect: connection refused Sep 4 00:07:49.114262 kubelet[2897]: E0904 00:07:49.113823 2897 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-190&limit=500&resourceVersion=0\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:07:49.118981 kubelet[2897]: I0904 00:07:49.118948 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-190" Sep 4 00:07:49.120803 kubelet[2897]: E0904 00:07:49.120672 2897 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.190:6443/api/v1/nodes\": dial tcp 172.31.29.190:6443: connect: connection refused" node="ip-172-31-29-190" Sep 4 00:07:49.165169 containerd[2012]: time="2025-09-04T00:07:49.165098482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-190,Uid:213f0f4342fa5592fb12d81e87a5977c,Namespace:kube-system,Attempt:0,} returns sandbox id \"db739e3ffc33f4d0d00c55ba24685d8b35b7287ce35d82bd3a7ba116b424d62f\"" Sep 4 00:07:49.172147 containerd[2012]: time="2025-09-04T00:07:49.171685896Z" 
level=info msg="CreateContainer within sandbox \"db739e3ffc33f4d0d00c55ba24685d8b35b7287ce35d82bd3a7ba116b424d62f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 00:07:49.186485 kubelet[2897]: W0904 00:07:49.186328 2897 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.190:6443: connect: connection refused Sep 4 00:07:49.186735 kubelet[2897]: E0904 00:07:49.186643 2897 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:07:49.190257 containerd[2012]: time="2025-09-04T00:07:49.190163698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-190,Uid:d6e05e77bc77a8729d0324932d21a0da,Namespace:kube-system,Attempt:0,} returns sandbox id \"6296b6ea7dc53d87c8cff92ee8b7b1a72445b47bf350e2ecbafcd2095f4101d5\"" Sep 4 00:07:49.197516 containerd[2012]: time="2025-09-04T00:07:49.197442984Z" level=info msg="CreateContainer within sandbox \"6296b6ea7dc53d87c8cff92ee8b7b1a72445b47bf350e2ecbafcd2095f4101d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 00:07:49.199813 containerd[2012]: time="2025-09-04T00:07:49.199760666Z" level=info msg="Container 2aa287677d5dac5e6cbc299d26a9c7a37417b44d9c0850d8df93a32a7dd030c4: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:49.219277 containerd[2012]: time="2025-09-04T00:07:49.219126998Z" level=info msg="CreateContainer within sandbox \"db739e3ffc33f4d0d00c55ba24685d8b35b7287ce35d82bd3a7ba116b424d62f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"2aa287677d5dac5e6cbc299d26a9c7a37417b44d9c0850d8df93a32a7dd030c4\"" Sep 4 00:07:49.221143 containerd[2012]: time="2025-09-04T00:07:49.221099534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-190,Uid:6f873f151ea64f087302fd1472009a89,Namespace:kube-system,Attempt:0,} returns sandbox id \"2965fcbba414b3e562c03253892b9938c88eb5cf656101ad54b46ffd6974ab5f\"" Sep 4 00:07:49.221296 containerd[2012]: time="2025-09-04T00:07:49.221272000Z" level=info msg="Container 9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:49.222209 containerd[2012]: time="2025-09-04T00:07:49.222085997Z" level=info msg="StartContainer for \"2aa287677d5dac5e6cbc299d26a9c7a37417b44d9c0850d8df93a32a7dd030c4\"" Sep 4 00:07:49.225031 containerd[2012]: time="2025-09-04T00:07:49.224998089Z" level=info msg="connecting to shim 2aa287677d5dac5e6cbc299d26a9c7a37417b44d9c0850d8df93a32a7dd030c4" address="unix:///run/containerd/s/1a44b04b88ed8d6fc3955d1b2715f96a415be2a6a54f8d6b0d755a68cfb72e7d" protocol=ttrpc version=3 Sep 4 00:07:49.227299 containerd[2012]: time="2025-09-04T00:07:49.226995567Z" level=info msg="CreateContainer within sandbox \"2965fcbba414b3e562c03253892b9938c88eb5cf656101ad54b46ffd6974ab5f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 00:07:49.231608 containerd[2012]: time="2025-09-04T00:07:49.231558175Z" level=info msg="CreateContainer within sandbox \"6296b6ea7dc53d87c8cff92ee8b7b1a72445b47bf350e2ecbafcd2095f4101d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8\"" Sep 4 00:07:49.232826 containerd[2012]: time="2025-09-04T00:07:49.232789513Z" level=info msg="StartContainer for \"9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8\"" Sep 4 00:07:49.236265 containerd[2012]: time="2025-09-04T00:07:49.235140205Z" level=info msg="connecting to 
shim 9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8" address="unix:///run/containerd/s/4f202568cf365149ab74a74c1b2b88fe2176236f7bd457d9bfd7c47966d77899" protocol=ttrpc version=3 Sep 4 00:07:49.240576 containerd[2012]: time="2025-09-04T00:07:49.240528477Z" level=info msg="Container 7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:49.245417 kubelet[2897]: W0904 00:07:49.245356 2897 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.190:6443: connect: connection refused Sep 4 00:07:49.245635 kubelet[2897]: E0904 00:07:49.245612 2897 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:07:49.253428 containerd[2012]: time="2025-09-04T00:07:49.253356558Z" level=info msg="CreateContainer within sandbox \"2965fcbba414b3e562c03253892b9938c88eb5cf656101ad54b46ffd6974ab5f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c\"" Sep 4 00:07:49.254128 containerd[2012]: time="2025-09-04T00:07:49.254098169Z" level=info msg="StartContainer for \"7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c\"" Sep 4 00:07:49.257585 containerd[2012]: time="2025-09-04T00:07:49.257546761Z" level=info msg="connecting to shim 7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c" address="unix:///run/containerd/s/ff8529acb6ba0da36a398432786a8a905488addc4683b8bc9ae8060ca98299f7" protocol=ttrpc version=3 Sep 4 00:07:49.260610 systemd[1]: Started 
cri-containerd-2aa287677d5dac5e6cbc299d26a9c7a37417b44d9c0850d8df93a32a7dd030c4.scope - libcontainer container 2aa287677d5dac5e6cbc299d26a9c7a37417b44d9c0850d8df93a32a7dd030c4. Sep 4 00:07:49.276886 systemd[1]: Started cri-containerd-9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8.scope - libcontainer container 9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8. Sep 4 00:07:49.299501 systemd[1]: Started cri-containerd-7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c.scope - libcontainer container 7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c. Sep 4 00:07:49.390727 containerd[2012]: time="2025-09-04T00:07:49.390571619Z" level=info msg="StartContainer for \"2aa287677d5dac5e6cbc299d26a9c7a37417b44d9c0850d8df93a32a7dd030c4\" returns successfully" Sep 4 00:07:49.399026 containerd[2012]: time="2025-09-04T00:07:49.398932788Z" level=info msg="StartContainer for \"9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8\" returns successfully" Sep 4 00:07:49.436856 containerd[2012]: time="2025-09-04T00:07:49.436796791Z" level=info msg="StartContainer for \"7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c\" returns successfully" Sep 4 00:07:49.685649 kubelet[2897]: W0904 00:07:49.685502 2897 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.190:6443: connect: connection refused Sep 4 00:07:49.685649 kubelet[2897]: E0904 00:07:49.685587 2897 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:07:49.721593 kubelet[2897]: 
E0904 00:07:49.721540 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-190?timeout=10s\": dial tcp 172.31.29.190:6443: connect: connection refused" interval="1.6s" Sep 4 00:07:49.922818 kubelet[2897]: I0904 00:07:49.922790 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-190" Sep 4 00:07:49.923251 kubelet[2897]: E0904 00:07:49.923209 2897 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.190:6443/api/v1/nodes\": dial tcp 172.31.29.190:6443: connect: connection refused" node="ip-172-31-29-190" Sep 4 00:07:50.273749 kubelet[2897]: E0904 00:07:50.273706 2897 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.190:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:07:50.388266 kubelet[2897]: E0904 00:07:50.388224 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:50.394298 kubelet[2897]: E0904 00:07:50.394175 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:50.399940 kubelet[2897]: E0904 00:07:50.399734 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:50.743741 kubelet[2897]: E0904 00:07:50.743577 2897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.31.29.190:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.190:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-190.1861ebaf6ab30800 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-190,UID:ip-172-31-29-190,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-190,},FirstTimestamp:2025-09-04 00:07:48.293126144 +0000 UTC m=+0.403486835,LastTimestamp:2025-09-04 00:07:48.293126144 +0000 UTC m=+0.403486835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-190,}" Sep 4 00:07:51.398788 kubelet[2897]: E0904 00:07:51.398462 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:51.399115 kubelet[2897]: E0904 00:07:51.398820 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:51.399115 kubelet[2897]: E0904 00:07:51.399016 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:51.525362 kubelet[2897]: I0904 00:07:51.525324 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-190" Sep 4 00:07:52.402260 kubelet[2897]: E0904 00:07:52.402189 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:52.403722 kubelet[2897]: E0904 00:07:52.402709 2897 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:52.423548 kubelet[2897]: E0904 00:07:52.423514 2897 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-190\" not found" node="ip-172-31-29-190" Sep 4 00:07:52.510092 kubelet[2897]: I0904 00:07:52.509834 2897 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-190" Sep 4 00:07:52.511343 kubelet[2897]: I0904 00:07:52.511323 2897 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-190" Sep 4 00:07:52.569859 kubelet[2897]: E0904 00:07:52.569817 2897 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-190\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-190" Sep 4 00:07:52.569859 kubelet[2897]: I0904 00:07:52.569853 2897 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:52.571858 kubelet[2897]: E0904 00:07:52.571823 2897 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-190\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:52.571858 kubelet[2897]: I0904 00:07:52.571853 2897 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:52.573797 kubelet[2897]: E0904 00:07:52.573765 2897 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-190\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:53.287161 kubelet[2897]: I0904 00:07:53.287125 2897 apiserver.go:52] "Watching apiserver" Sep 4 00:07:53.314942 kubelet[2897]: I0904 00:07:53.314906 2897 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 00:07:54.611848 kubelet[2897]: I0904 00:07:54.611788 2897 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:54.844433 systemd[1]: Reload requested from client PID 3167 ('systemctl') (unit session-7.scope)... Sep 4 00:07:54.844452 systemd[1]: Reloading... Sep 4 00:07:54.975263 zram_generator::config[3212]: No configuration found. Sep 4 00:07:55.093410 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:07:55.252997 systemd[1]: Reloading finished in 407 ms. Sep 4 00:07:55.295387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:07:55.297879 kubelet[2897]: I0904 00:07:55.297335 2897 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:07:55.300827 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 00:07:55.312790 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 00:07:55.313123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:07:55.313200 systemd[1]: kubelet.service: Consumed 805ms CPU time, 128.3M memory peak. Sep 4 00:07:55.315546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:07:55.610961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:07:55.619790 (kubelet)[3275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:07:55.696927 kubelet[3275]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:07:55.697361 kubelet[3275]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 00:07:55.697421 kubelet[3275]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:07:55.697725 kubelet[3275]: I0904 00:07:55.697688 3275 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:07:55.705893 kubelet[3275]: I0904 00:07:55.705853 3275 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 00:07:55.706072 kubelet[3275]: I0904 00:07:55.706049 3275 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:07:55.706416 kubelet[3275]: I0904 00:07:55.706391 3275 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 00:07:55.707744 kubelet[3275]: I0904 00:07:55.707715 3275 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 00:07:55.710662 kubelet[3275]: I0904 00:07:55.710002 3275 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:07:55.718573 kubelet[3275]: I0904 00:07:55.718551 3275 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:07:55.721945 kubelet[3275]: I0904 00:07:55.721920 3275 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:07:55.722346 kubelet[3275]: I0904 00:07:55.722314 3275 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:07:55.722918 kubelet[3275]: I0904 00:07:55.722524 3275 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-190","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 00:07:55.723122 kubelet[3275]: I0904 00:07:55.723106 3275 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 4 00:07:55.723194 kubelet[3275]: I0904 00:07:55.723186 3275 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 00:07:55.723323 kubelet[3275]: I0904 00:07:55.723314 3275 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:07:55.723580 kubelet[3275]: I0904 00:07:55.723569 3275 kubelet.go:446] "Attempting to sync node with API server" Sep 4 00:07:55.723667 kubelet[3275]: I0904 00:07:55.723659 3275 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:07:55.723749 kubelet[3275]: I0904 00:07:55.723741 3275 kubelet.go:352] "Adding apiserver pod source" Sep 4 00:07:55.723809 kubelet[3275]: I0904 00:07:55.723802 3275 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:07:55.731835 kubelet[3275]: I0904 00:07:55.731365 3275 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:07:55.732201 kubelet[3275]: I0904 00:07:55.732187 3275 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:07:55.742352 kubelet[3275]: I0904 00:07:55.742326 3275 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 00:07:55.743429 kubelet[3275]: I0904 00:07:55.743405 3275 server.go:1287] "Started kubelet" Sep 4 00:07:55.747270 kubelet[3275]: I0904 00:07:55.746578 3275 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:07:55.747270 kubelet[3275]: I0904 00:07:55.746965 3275 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:07:55.747270 kubelet[3275]: I0904 00:07:55.747023 3275 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:07:55.748255 kubelet[3275]: I0904 00:07:55.748222 3275 server.go:479] "Adding debug handlers to kubelet server" Sep 4 00:07:55.749435 kubelet[3275]: I0904 00:07:55.749116 3275 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:07:55.750600 kubelet[3275]: I0904 00:07:55.750549 3275 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:07:55.761954 kubelet[3275]: I0904 00:07:55.761929 3275 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 00:07:55.762415 kubelet[3275]: E0904 00:07:55.762393 3275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-190\" not found" Sep 4 00:07:55.764635 kubelet[3275]: I0904 00:07:55.764336 3275 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 00:07:55.764635 kubelet[3275]: I0904 00:07:55.764476 3275 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:07:55.766731 kubelet[3275]: I0904 00:07:55.766702 3275 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:07:55.766858 kubelet[3275]: I0904 00:07:55.766828 3275 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:07:55.767333 kubelet[3275]: I0904 00:07:55.767307 3275 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:07:55.770059 kubelet[3275]: I0904 00:07:55.769664 3275 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 00:07:55.770059 kubelet[3275]: I0904 00:07:55.769733 3275 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 00:07:55.770059 kubelet[3275]: I0904 00:07:55.769757 3275 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 00:07:55.770059 kubelet[3275]: I0904 00:07:55.769767 3275 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 00:07:55.770059 kubelet[3275]: E0904 00:07:55.769821 3275 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:07:55.774644 kubelet[3275]: I0904 00:07:55.774610 3275 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:07:55.785426 kubelet[3275]: E0904 00:07:55.785401 3275 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:07:55.841142 kubelet[3275]: I0904 00:07:55.841101 3275 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 00:07:55.841786 kubelet[3275]: I0904 00:07:55.841410 3275 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 00:07:55.841786 kubelet[3275]: I0904 00:07:55.841433 3275 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:07:55.841786 kubelet[3275]: I0904 00:07:55.841586 3275 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 00:07:55.841786 kubelet[3275]: I0904 00:07:55.841596 3275 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 00:07:55.841786 kubelet[3275]: I0904 00:07:55.841613 3275 policy_none.go:49] "None policy: Start" Sep 4 00:07:55.841786 kubelet[3275]: I0904 00:07:55.841623 3275 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 00:07:55.841786 kubelet[3275]: I0904 00:07:55.841631 3275 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:07:55.841786 kubelet[3275]: I0904 00:07:55.841723 3275 state_mem.go:75] "Updated machine memory state" Sep 4 00:07:55.849859 kubelet[3275]: I0904 00:07:55.849726 3275 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:07:55.852363 kubelet[3275]: I0904 
00:07:55.852040 3275 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:07:55.852363 kubelet[3275]: I0904 00:07:55.852066 3275 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:07:55.855758 kubelet[3275]: I0904 00:07:55.855453 3275 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:07:55.857182 kubelet[3275]: E0904 00:07:55.857157 3275 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 00:07:55.870641 kubelet[3275]: I0904 00:07:55.870529 3275 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:55.877579 kubelet[3275]: I0904 00:07:55.877473 3275 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-190" Sep 4 00:07:55.877813 kubelet[3275]: I0904 00:07:55.877659 3275 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:55.894137 kubelet[3275]: E0904 00:07:55.893741 3275 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-190\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:55.955651 kubelet[3275]: I0904 00:07:55.955524 3275 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-190" Sep 4 00:07:55.968544 kubelet[3275]: I0904 00:07:55.968440 3275 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-190" Sep 4 00:07:55.968544 kubelet[3275]: I0904 00:07:55.968534 3275 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-190" Sep 4 00:07:56.065774 kubelet[3275]: I0904 00:07:56.065482 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/213f0f4342fa5592fb12d81e87a5977c-ca-certs\") pod \"kube-apiserver-ip-172-31-29-190\" (UID: \"213f0f4342fa5592fb12d81e87a5977c\") " pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:56.065774 kubelet[3275]: I0904 00:07:56.065530 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:56.065774 kubelet[3275]: I0904 00:07:56.065570 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f873f151ea64f087302fd1472009a89-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-190\" (UID: \"6f873f151ea64f087302fd1472009a89\") " pod="kube-system/kube-scheduler-ip-172-31-29-190" Sep 4 00:07:56.065774 kubelet[3275]: I0904 00:07:56.065601 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/213f0f4342fa5592fb12d81e87a5977c-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-190\" (UID: \"213f0f4342fa5592fb12d81e87a5977c\") " pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:56.065774 kubelet[3275]: I0904 00:07:56.065624 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/213f0f4342fa5592fb12d81e87a5977c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-190\" (UID: \"213f0f4342fa5592fb12d81e87a5977c\") " pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:56.066083 kubelet[3275]: I0904 00:07:56.065652 3275 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:56.066083 kubelet[3275]: I0904 00:07:56.065674 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:56.066083 kubelet[3275]: I0904 00:07:56.065709 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:56.066083 kubelet[3275]: I0904 00:07:56.065729 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6e05e77bc77a8729d0324932d21a0da-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-190\" (UID: \"d6e05e77bc77a8729d0324932d21a0da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-190" Sep 4 00:07:56.735346 kubelet[3275]: I0904 00:07:56.735307 3275 apiserver.go:52] "Watching apiserver" Sep 4 00:07:56.765936 kubelet[3275]: I0904 00:07:56.765414 3275 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 00:07:56.813784 kubelet[3275]: I0904 00:07:56.812960 3275 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-190" 
Sep 4 00:07:56.830853 kubelet[3275]: E0904 00:07:56.830815 3275 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-190\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-190" Sep 4 00:07:56.861348 kubelet[3275]: I0904 00:07:56.860913 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-190" podStartSLOduration=1.860873911 podStartE2EDuration="1.860873911s" podCreationTimestamp="2025-09-04 00:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:07:56.85709428 +0000 UTC m=+1.229431774" watchObservedRunningTime="2025-09-04 00:07:56.860873911 +0000 UTC m=+1.233211399" Sep 4 00:07:56.897943 kubelet[3275]: I0904 00:07:56.897513 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-190" podStartSLOduration=2.897493068 podStartE2EDuration="2.897493068s" podCreationTimestamp="2025-09-04 00:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:07:56.897361864 +0000 UTC m=+1.269699358" watchObservedRunningTime="2025-09-04 00:07:56.897493068 +0000 UTC m=+1.269830562" Sep 4 00:07:56.897943 kubelet[3275]: I0904 00:07:56.897616 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-190" podStartSLOduration=1.897610315 podStartE2EDuration="1.897610315s" podCreationTimestamp="2025-09-04 00:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:07:56.883123566 +0000 UTC m=+1.255461062" watchObservedRunningTime="2025-09-04 00:07:56.897610315 +0000 UTC m=+1.269947816" Sep 4 00:08:00.801520 kubelet[3275]: I0904 00:08:00.801059 3275 
kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 00:08:00.801991 containerd[2012]: time="2025-09-04T00:08:00.801456291Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 00:08:00.802455 kubelet[3275]: I0904 00:08:00.802428 3275 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 00:08:01.223814 systemd[1]: Created slice kubepods-besteffort-pod7ccc0105_bef0_41d2_abee_a087ade179fd.slice - libcontainer container kubepods-besteffort-pod7ccc0105_bef0_41d2_abee_a087ade179fd.slice. Sep 4 00:08:01.400256 kubelet[3275]: I0904 00:08:01.400051 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ccc0105-bef0-41d2-abee-a087ade179fd-lib-modules\") pod \"kube-proxy-678v7\" (UID: \"7ccc0105-bef0-41d2-abee-a087ade179fd\") " pod="kube-system/kube-proxy-678v7" Sep 4 00:08:01.400256 kubelet[3275]: I0904 00:08:01.400188 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjtsk\" (UniqueName: \"kubernetes.io/projected/7ccc0105-bef0-41d2-abee-a087ade179fd-kube-api-access-zjtsk\") pod \"kube-proxy-678v7\" (UID: \"7ccc0105-bef0-41d2-abee-a087ade179fd\") " pod="kube-system/kube-proxy-678v7" Sep 4 00:08:01.400477 kubelet[3275]: I0904 00:08:01.400305 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ccc0105-bef0-41d2-abee-a087ade179fd-kube-proxy\") pod \"kube-proxy-678v7\" (UID: \"7ccc0105-bef0-41d2-abee-a087ade179fd\") " pod="kube-system/kube-proxy-678v7" Sep 4 00:08:01.400477 kubelet[3275]: I0904 00:08:01.400373 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7ccc0105-bef0-41d2-abee-a087ade179fd-xtables-lock\") pod \"kube-proxy-678v7\" (UID: \"7ccc0105-bef0-41d2-abee-a087ade179fd\") " pod="kube-system/kube-proxy-678v7" Sep 4 00:08:01.570191 kubelet[3275]: E0904 00:08:01.558252 3275 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 00:08:01.570191 kubelet[3275]: E0904 00:08:01.558415 3275 projected.go:194] Error preparing data for projected volume kube-api-access-zjtsk for pod kube-system/kube-proxy-678v7: configmap "kube-root-ca.crt" not found Sep 4 00:08:01.570626 kubelet[3275]: E0904 00:08:01.570562 3275 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ccc0105-bef0-41d2-abee-a087ade179fd-kube-api-access-zjtsk podName:7ccc0105-bef0-41d2-abee-a087ade179fd nodeName:}" failed. No retries permitted until 2025-09-04 00:08:02.070520836 +0000 UTC m=+6.442858328 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zjtsk" (UniqueName: "kubernetes.io/projected/7ccc0105-bef0-41d2-abee-a087ade179fd-kube-api-access-zjtsk") pod "kube-proxy-678v7" (UID: "7ccc0105-bef0-41d2-abee-a087ade179fd") : configmap "kube-root-ca.crt" not found Sep 4 00:08:02.093063 systemd[1]: Created slice kubepods-besteffort-pod674905ca_2d68_4c09_a78a_d2ad8711f8e8.slice - libcontainer container kubepods-besteffort-pod674905ca_2d68_4c09_a78a_d2ad8711f8e8.slice. 
Sep 4 00:08:02.222275 kubelet[3275]: I0904 00:08:02.222192 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/674905ca-2d68-4c09-a78a-d2ad8711f8e8-var-lib-calico\") pod \"tigera-operator-755d956888-smsnw\" (UID: \"674905ca-2d68-4c09-a78a-d2ad8711f8e8\") " pod="tigera-operator/tigera-operator-755d956888-smsnw" Sep 4 00:08:02.222275 kubelet[3275]: I0904 00:08:02.222303 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8xx\" (UniqueName: \"kubernetes.io/projected/674905ca-2d68-4c09-a78a-d2ad8711f8e8-kube-api-access-9w8xx\") pod \"tigera-operator-755d956888-smsnw\" (UID: \"674905ca-2d68-4c09-a78a-d2ad8711f8e8\") " pod="tigera-operator/tigera-operator-755d956888-smsnw" Sep 4 00:08:02.400329 containerd[2012]: time="2025-09-04T00:08:02.400194991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-smsnw,Uid:674905ca-2d68-4c09-a78a-d2ad8711f8e8,Namespace:tigera-operator,Attempt:0,}" Sep 4 00:08:02.430262 containerd[2012]: time="2025-09-04T00:08:02.429933841Z" level=info msg="connecting to shim 92b34ed66844a4266ca7f265ffb3e221c56dfde65e22ba92d76cab032c2e917b" address="unix:///run/containerd/s/9450c386a5700a314714e9c0bcb7fca553d3de75d75b2935b4d23809f0944ba4" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:02.436648 containerd[2012]: time="2025-09-04T00:08:02.436602930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-678v7,Uid:7ccc0105-bef0-41d2-abee-a087ade179fd,Namespace:kube-system,Attempt:0,}" Sep 4 00:08:02.474514 systemd[1]: Started cri-containerd-92b34ed66844a4266ca7f265ffb3e221c56dfde65e22ba92d76cab032c2e917b.scope - libcontainer container 92b34ed66844a4266ca7f265ffb3e221c56dfde65e22ba92d76cab032c2e917b. 
Sep 4 00:08:02.479329 containerd[2012]: time="2025-09-04T00:08:02.479280802Z" level=info msg="connecting to shim b78c7abb7fc31c391803efbad4d6064433546795c7fc765727dcdb9ce2090ce2" address="unix:///run/containerd/s/063ec5fdc75c0d8101fe9696b42654dc24b575b285cb47d1d265612005ebc1fd" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:02.520497 systemd[1]: Started cri-containerd-b78c7abb7fc31c391803efbad4d6064433546795c7fc765727dcdb9ce2090ce2.scope - libcontainer container b78c7abb7fc31c391803efbad4d6064433546795c7fc765727dcdb9ce2090ce2. Sep 4 00:08:02.574032 containerd[2012]: time="2025-09-04T00:08:02.573327376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-678v7,Uid:7ccc0105-bef0-41d2-abee-a087ade179fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b78c7abb7fc31c391803efbad4d6064433546795c7fc765727dcdb9ce2090ce2\"" Sep 4 00:08:02.577205 containerd[2012]: time="2025-09-04T00:08:02.577149870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-smsnw,Uid:674905ca-2d68-4c09-a78a-d2ad8711f8e8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"92b34ed66844a4266ca7f265ffb3e221c56dfde65e22ba92d76cab032c2e917b\"" Sep 4 00:08:02.580467 containerd[2012]: time="2025-09-04T00:08:02.580428803Z" level=info msg="CreateContainer within sandbox \"b78c7abb7fc31c391803efbad4d6064433546795c7fc765727dcdb9ce2090ce2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 00:08:02.582331 containerd[2012]: time="2025-09-04T00:08:02.582294126Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 00:08:02.602100 containerd[2012]: time="2025-09-04T00:08:02.602039565Z" level=info msg="Container d3a2da583577dd3b063fa28584edc0ae9efab0021160803eecfccefdd4021eda: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:08:02.612281 containerd[2012]: time="2025-09-04T00:08:02.612206497Z" level=info msg="CreateContainer within sandbox 
\"b78c7abb7fc31c391803efbad4d6064433546795c7fc765727dcdb9ce2090ce2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d3a2da583577dd3b063fa28584edc0ae9efab0021160803eecfccefdd4021eda\"" Sep 4 00:08:02.613328 containerd[2012]: time="2025-09-04T00:08:02.612958861Z" level=info msg="StartContainer for \"d3a2da583577dd3b063fa28584edc0ae9efab0021160803eecfccefdd4021eda\"" Sep 4 00:08:02.615037 containerd[2012]: time="2025-09-04T00:08:02.614985480Z" level=info msg="connecting to shim d3a2da583577dd3b063fa28584edc0ae9efab0021160803eecfccefdd4021eda" address="unix:///run/containerd/s/063ec5fdc75c0d8101fe9696b42654dc24b575b285cb47d1d265612005ebc1fd" protocol=ttrpc version=3 Sep 4 00:08:02.637461 systemd[1]: Started cri-containerd-d3a2da583577dd3b063fa28584edc0ae9efab0021160803eecfccefdd4021eda.scope - libcontainer container d3a2da583577dd3b063fa28584edc0ae9efab0021160803eecfccefdd4021eda. Sep 4 00:08:02.681679 containerd[2012]: time="2025-09-04T00:08:02.681546838Z" level=info msg="StartContainer for \"d3a2da583577dd3b063fa28584edc0ae9efab0021160803eecfccefdd4021eda\" returns successfully" Sep 4 00:08:02.848578 kubelet[3275]: I0904 00:08:02.847996 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-678v7" podStartSLOduration=1.847980847 podStartE2EDuration="1.847980847s" podCreationTimestamp="2025-09-04 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:08:02.847764144 +0000 UTC m=+7.220101678" watchObservedRunningTime="2025-09-04 00:08:02.847980847 +0000 UTC m=+7.220318341" Sep 4 00:08:04.221866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274746626.mount: Deactivated successfully. 
Sep 4 00:08:05.725941 containerd[2012]: time="2025-09-04T00:08:05.725890475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:05.727822 containerd[2012]: time="2025-09-04T00:08:05.727651225Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 00:08:05.729786 containerd[2012]: time="2025-09-04T00:08:05.729746740Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:05.733277 containerd[2012]: time="2025-09-04T00:08:05.733226680Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:05.734158 containerd[2012]: time="2025-09-04T00:08:05.734111491Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.151776482s" Sep 4 00:08:05.734158 containerd[2012]: time="2025-09-04T00:08:05.734147182Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 00:08:05.736887 containerd[2012]: time="2025-09-04T00:08:05.736857448Z" level=info msg="CreateContainer within sandbox \"92b34ed66844a4266ca7f265ffb3e221c56dfde65e22ba92d76cab032c2e917b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 00:08:05.750620 containerd[2012]: time="2025-09-04T00:08:05.750370582Z" level=info msg="Container 
68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:08:05.758591 containerd[2012]: time="2025-09-04T00:08:05.758548919Z" level=info msg="CreateContainer within sandbox \"92b34ed66844a4266ca7f265ffb3e221c56dfde65e22ba92d76cab032c2e917b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\"" Sep 4 00:08:05.759070 containerd[2012]: time="2025-09-04T00:08:05.759044521Z" level=info msg="StartContainer for \"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\"" Sep 4 00:08:05.761195 containerd[2012]: time="2025-09-04T00:08:05.761157782Z" level=info msg="connecting to shim 68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505" address="unix:///run/containerd/s/9450c386a5700a314714e9c0bcb7fca553d3de75d75b2935b4d23809f0944ba4" protocol=ttrpc version=3 Sep 4 00:08:05.794508 systemd[1]: Started cri-containerd-68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505.scope - libcontainer container 68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505. 
Sep 4 00:08:05.830496 containerd[2012]: time="2025-09-04T00:08:05.830444491Z" level=info msg="StartContainer for \"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\" returns successfully" Sep 4 00:08:05.885533 kubelet[3275]: I0904 00:08:05.884227 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-smsnw" podStartSLOduration=1.7291330299999998 podStartE2EDuration="4.884211371s" podCreationTimestamp="2025-09-04 00:08:01 +0000 UTC" firstStartedPulling="2025-09-04 00:08:02.580217481 +0000 UTC m=+6.952554962" lastFinishedPulling="2025-09-04 00:08:05.73529583 +0000 UTC m=+10.107633303" observedRunningTime="2025-09-04 00:08:05.883905709 +0000 UTC m=+10.256243203" watchObservedRunningTime="2025-09-04 00:08:05.884211371 +0000 UTC m=+10.256548864" Sep 4 00:08:09.289067 update_engine[1980]: I20250904 00:08:09.288290 1980 update_attempter.cc:509] Updating boot flags... Sep 4 00:08:13.100121 sudo[2346]: pam_unix(sudo:session): session closed for user root Sep 4 00:08:13.122629 sshd[2345]: Connection closed by 139.178.68.195 port 49372 Sep 4 00:08:13.123990 sshd-session[2343]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:13.128083 systemd[1]: sshd@6-172.31.29.190:22-139.178.68.195:49372.service: Deactivated successfully. Sep 4 00:08:13.128611 systemd-logind[1979]: Session 7 logged out. Waiting for processes to exit. Sep 4 00:08:13.132956 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 00:08:13.133129 systemd[1]: session-7.scope: Consumed 5.052s CPU time, 152.6M memory peak. Sep 4 00:08:13.138873 systemd-logind[1979]: Removed session 7. Sep 4 00:08:18.183637 systemd[1]: Created slice kubepods-besteffort-pod65d0d8f7_f2e6_40aa_b0fc_06a7a72bd1a5.slice - libcontainer container kubepods-besteffort-pod65d0d8f7_f2e6_40aa_b0fc_06a7a72bd1a5.slice. 
Sep 4 00:08:18.230335 kubelet[3275]: I0904 00:08:18.229178 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d0d8f7-f2e6-40aa-b0fc-06a7a72bd1a5-tigera-ca-bundle\") pod \"calico-typha-984589745-5bbgh\" (UID: \"65d0d8f7-f2e6-40aa-b0fc-06a7a72bd1a5\") " pod="calico-system/calico-typha-984589745-5bbgh" Sep 4 00:08:18.230335 kubelet[3275]: I0904 00:08:18.229230 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65d0d8f7-f2e6-40aa-b0fc-06a7a72bd1a5-typha-certs\") pod \"calico-typha-984589745-5bbgh\" (UID: \"65d0d8f7-f2e6-40aa-b0fc-06a7a72bd1a5\") " pod="calico-system/calico-typha-984589745-5bbgh" Sep 4 00:08:18.230335 kubelet[3275]: I0904 00:08:18.229281 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxkz\" (UniqueName: \"kubernetes.io/projected/65d0d8f7-f2e6-40aa-b0fc-06a7a72bd1a5-kube-api-access-pmxkz\") pod \"calico-typha-984589745-5bbgh\" (UID: \"65d0d8f7-f2e6-40aa-b0fc-06a7a72bd1a5\") " pod="calico-system/calico-typha-984589745-5bbgh" Sep 4 00:08:18.404522 systemd[1]: Created slice kubepods-besteffort-podb5ea0edb_a4f9_4a16_8a7c_1f30cb9bae9f.slice - libcontainer container kubepods-besteffort-podb5ea0edb_a4f9_4a16_8a7c_1f30cb9bae9f.slice. 
Sep 4 00:08:18.430139 kubelet[3275]: I0904 00:08:18.430080 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-cni-log-dir\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.430528 kubelet[3275]: I0904 00:08:18.430390 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-cni-net-dir\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.430729 kubelet[3275]: I0904 00:08:18.430482 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-lib-modules\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.430880 kubelet[3275]: I0904 00:08:18.430839 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-node-certs\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.431044 kubelet[3275]: I0904 00:08:18.430931 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-policysync\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.431157 kubelet[3275]: I0904 00:08:18.431139 3275 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-cni-bin-dir\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.431368 kubelet[3275]: I0904 00:08:18.431335 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-tigera-ca-bundle\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.431565 kubelet[3275]: I0904 00:08:18.431540 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-var-lib-calico\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.431728 kubelet[3275]: I0904 00:08:18.431710 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdwvs\" (UniqueName: \"kubernetes.io/projected/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-kube-api-access-vdwvs\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.431935 kubelet[3275]: I0904 00:08:18.431892 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-xtables-lock\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.432070 kubelet[3275]: I0904 00:08:18.432026 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-var-run-calico\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.432188 kubelet[3275]: I0904 00:08:18.432172 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f-flexvol-driver-host\") pod \"calico-node-9w4ng\" (UID: \"b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f\") " pod="calico-system/calico-node-9w4ng" Sep 4 00:08:18.489775 containerd[2012]: time="2025-09-04T00:08:18.489717536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-984589745-5bbgh,Uid:65d0d8f7-f2e6-40aa-b0fc-06a7a72bd1a5,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:18.532132 containerd[2012]: time="2025-09-04T00:08:18.531689883Z" level=info msg="connecting to shim 6648053002d50e843487fe6167d439044a9fe9df1b8242bbbac9cdf3eeeb6cd1" address="unix:///run/containerd/s/58650ee4be0fdc5ea3ee039a72efbd7dfa30180ad41e8ed3ca1e5e8cfb1afe3a" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:18.551054 kubelet[3275]: E0904 00:08:18.551014 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.552295 kubelet[3275]: W0904 00:08:18.552182 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.553633 kubelet[3275]: E0904 00:08:18.553566 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.575952 kubelet[3275]: E0904 00:08:18.575772 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.575952 kubelet[3275]: W0904 00:08:18.575796 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.575952 kubelet[3275]: E0904 00:08:18.575822 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.597710 systemd[1]: Started cri-containerd-6648053002d50e843487fe6167d439044a9fe9df1b8242bbbac9cdf3eeeb6cd1.scope - libcontainer container 6648053002d50e843487fe6167d439044a9fe9df1b8242bbbac9cdf3eeeb6cd1. Sep 4 00:08:18.627031 kubelet[3275]: E0904 00:08:18.626549 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4csm7" podUID="855604f4-24b1-40c7-86a6-198bf7be5142" Sep 4 00:08:18.630766 kubelet[3275]: E0904 00:08:18.630740 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.631060 kubelet[3275]: W0904 00:08:18.631017 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.631342 kubelet[3275]: E0904 00:08:18.631158 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.631780 kubelet[3275]: E0904 00:08:18.631704 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.631780 kubelet[3275]: W0904 00:08:18.631730 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.631780 kubelet[3275]: E0904 00:08:18.631746 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.632492 kubelet[3275]: E0904 00:08:18.632312 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.632492 kubelet[3275]: W0904 00:08:18.632426 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.632492 kubelet[3275]: E0904 00:08:18.632443 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.633084 kubelet[3275]: E0904 00:08:18.633070 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.633282 kubelet[3275]: W0904 00:08:18.633151 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.633282 kubelet[3275]: E0904 00:08:18.633168 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.633876 kubelet[3275]: E0904 00:08:18.633819 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.633876 kubelet[3275]: W0904 00:08:18.633833 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.634173 kubelet[3275]: E0904 00:08:18.634029 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.634470 kubelet[3275]: E0904 00:08:18.634458 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.634652 kubelet[3275]: W0904 00:08:18.634578 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.634652 kubelet[3275]: E0904 00:08:18.634596 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.635003 kubelet[3275]: E0904 00:08:18.634927 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.635003 kubelet[3275]: W0904 00:08:18.634940 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.635003 kubelet[3275]: E0904 00:08:18.634954 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.635371 kubelet[3275]: E0904 00:08:18.635322 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.635371 kubelet[3275]: W0904 00:08:18.635336 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.635371 kubelet[3275]: E0904 00:08:18.635349 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.635952 kubelet[3275]: E0904 00:08:18.635845 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.635952 kubelet[3275]: W0904 00:08:18.635873 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.635952 kubelet[3275]: E0904 00:08:18.635888 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.636534 kubelet[3275]: E0904 00:08:18.636505 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.636697 kubelet[3275]: W0904 00:08:18.636619 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.636697 kubelet[3275]: E0904 00:08:18.636637 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.637134 kubelet[3275]: E0904 00:08:18.637038 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.637134 kubelet[3275]: W0904 00:08:18.637061 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.637134 kubelet[3275]: E0904 00:08:18.637074 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.637590 kubelet[3275]: E0904 00:08:18.637578 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.637740 kubelet[3275]: W0904 00:08:18.637629 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.637740 kubelet[3275]: E0904 00:08:18.637646 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.638614 kubelet[3275]: E0904 00:08:18.638571 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.638614 kubelet[3275]: W0904 00:08:18.638585 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.638905 kubelet[3275]: E0904 00:08:18.638735 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.639211 kubelet[3275]: E0904 00:08:18.639147 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.639211 kubelet[3275]: W0904 00:08:18.639161 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.639211 kubelet[3275]: E0904 00:08:18.639174 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.639892 kubelet[3275]: E0904 00:08:18.639711 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.640269 kubelet[3275]: W0904 00:08:18.640136 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.640269 kubelet[3275]: E0904 00:08:18.640157 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.640900 kubelet[3275]: E0904 00:08:18.640756 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.640900 kubelet[3275]: W0904 00:08:18.640770 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.640900 kubelet[3275]: E0904 00:08:18.640783 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.641539 kubelet[3275]: E0904 00:08:18.641390 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.641539 kubelet[3275]: W0904 00:08:18.641426 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.641539 kubelet[3275]: E0904 00:08:18.641440 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.642126 kubelet[3275]: E0904 00:08:18.642083 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.642126 kubelet[3275]: W0904 00:08:18.642097 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.642444 kubelet[3275]: E0904 00:08:18.642328 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.642793 kubelet[3275]: E0904 00:08:18.642780 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.643048 kubelet[3275]: W0904 00:08:18.642976 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.643048 kubelet[3275]: E0904 00:08:18.642994 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.643614 kubelet[3275]: E0904 00:08:18.643601 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.643775 kubelet[3275]: W0904 00:08:18.643711 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.643775 kubelet[3275]: E0904 00:08:18.643729 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.644362 kubelet[3275]: E0904 00:08:18.644337 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.644613 kubelet[3275]: W0904 00:08:18.644452 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.644613 kubelet[3275]: E0904 00:08:18.644469 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.644613 kubelet[3275]: I0904 00:08:18.644521 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/855604f4-24b1-40c7-86a6-198bf7be5142-registration-dir\") pod \"csi-node-driver-4csm7\" (UID: \"855604f4-24b1-40c7-86a6-198bf7be5142\") " pod="calico-system/csi-node-driver-4csm7" Sep 4 00:08:18.645397 kubelet[3275]: E0904 00:08:18.645375 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.645763 kubelet[3275]: W0904 00:08:18.645521 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.645763 kubelet[3275]: E0904 00:08:18.645553 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.646024 kubelet[3275]: I0904 00:08:18.645828 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/855604f4-24b1-40c7-86a6-198bf7be5142-kubelet-dir\") pod \"csi-node-driver-4csm7\" (UID: \"855604f4-24b1-40c7-86a6-198bf7be5142\") " pod="calico-system/csi-node-driver-4csm7" Sep 4 00:08:18.647441 kubelet[3275]: E0904 00:08:18.646548 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.647441 kubelet[3275]: W0904 00:08:18.646562 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.647441 kubelet[3275]: E0904 00:08:18.647301 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.647935 kubelet[3275]: E0904 00:08:18.647764 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.647935 kubelet[3275]: W0904 00:08:18.647782 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.647935 kubelet[3275]: E0904 00:08:18.647866 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.648302 kubelet[3275]: E0904 00:08:18.648288 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.648422 kubelet[3275]: W0904 00:08:18.648407 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.648588 kubelet[3275]: E0904 00:08:18.648574 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.648835 kubelet[3275]: I0904 00:08:18.648700 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnbmm\" (UniqueName: \"kubernetes.io/projected/855604f4-24b1-40c7-86a6-198bf7be5142-kube-api-access-rnbmm\") pod \"csi-node-driver-4csm7\" (UID: \"855604f4-24b1-40c7-86a6-198bf7be5142\") " pod="calico-system/csi-node-driver-4csm7" Sep 4 00:08:18.649456 kubelet[3275]: E0904 00:08:18.649440 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.649660 kubelet[3275]: W0904 00:08:18.649635 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.649891 kubelet[3275]: E0904 00:08:18.649741 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.650428 kubelet[3275]: E0904 00:08:18.650396 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.650428 kubelet[3275]: W0904 00:08:18.650411 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.650826 kubelet[3275]: E0904 00:08:18.650548 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.651159 kubelet[3275]: E0904 00:08:18.651107 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.651505 kubelet[3275]: W0904 00:08:18.651480 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.651718 kubelet[3275]: E0904 00:08:18.651588 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.651975 kubelet[3275]: I0904 00:08:18.651621 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/855604f4-24b1-40c7-86a6-198bf7be5142-socket-dir\") pod \"csi-node-driver-4csm7\" (UID: \"855604f4-24b1-40c7-86a6-198bf7be5142\") " pod="calico-system/csi-node-driver-4csm7" Sep 4 00:08:18.652385 kubelet[3275]: E0904 00:08:18.652348 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.652567 kubelet[3275]: W0904 00:08:18.652470 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.652567 kubelet[3275]: E0904 00:08:18.652507 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.652975 kubelet[3275]: E0904 00:08:18.652933 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.652975 kubelet[3275]: W0904 00:08:18.652959 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.653439 kubelet[3275]: E0904 00:08:18.653171 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.653610 kubelet[3275]: E0904 00:08:18.653522 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.653726 kubelet[3275]: W0904 00:08:18.653535 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.653887 kubelet[3275]: E0904 00:08:18.653822 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.654140 kubelet[3275]: I0904 00:08:18.653854 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/855604f4-24b1-40c7-86a6-198bf7be5142-varrun\") pod \"csi-node-driver-4csm7\" (UID: \"855604f4-24b1-40c7-86a6-198bf7be5142\") " pod="calico-system/csi-node-driver-4csm7" Sep 4 00:08:18.654564 kubelet[3275]: E0904 00:08:18.654532 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.654564 kubelet[3275]: W0904 00:08:18.654547 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.654781 kubelet[3275]: E0904 00:08:18.654702 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.655119 kubelet[3275]: E0904 00:08:18.655106 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.655274 kubelet[3275]: W0904 00:08:18.655207 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.655470 kubelet[3275]: E0904 00:08:18.655456 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.655854 kubelet[3275]: E0904 00:08:18.655751 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.655854 kubelet[3275]: W0904 00:08:18.655794 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.655854 kubelet[3275]: E0904 00:08:18.655808 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:18.656451 kubelet[3275]: E0904 00:08:18.656275 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.656451 kubelet[3275]: W0904 00:08:18.656289 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.656451 kubelet[3275]: E0904 00:08:18.656303 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:18.709867 containerd[2012]: time="2025-09-04T00:08:18.709751431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9w4ng,Uid:b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:18.758749 kubelet[3275]: E0904 00:08:18.758265 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:18.758749 kubelet[3275]: W0904 00:08:18.758306 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:18.758749 kubelet[3275]: E0904 00:08:18.758338 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 00:08:18.758749 kubelet[3275]: E0904 00:08:18.758704 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:08:18.758749 kubelet[3275]: W0904 00:08:18.758719 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 00:08:18.761179 kubelet[3275]: E0904 00:08:18.758966 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 00:08:18.775451 kubelet[3275]: E0904 00:08:18.775413 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 00:08:18.778360 containerd[2012]: time="2025-09-04T00:08:18.777453740Z" level=info msg="connecting to shim ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22" address="unix:///run/containerd/s/3ac1c8ceebcad29de4cf629c011772ce9c5c938c4443e13a840488248eb734fd" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:08:18.839685 containerd[2012]: time="2025-09-04T00:08:18.839364093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-984589745-5bbgh,Uid:65d0d8f7-f2e6-40aa-b0fc-06a7a72bd1a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6648053002d50e843487fe6167d439044a9fe9df1b8242bbbac9cdf3eeeb6cd1\""
Sep 4 00:08:18.845910 containerd[2012]: time="2025-09-04T00:08:18.845873432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 4 00:08:18.867959 systemd[1]: Started cri-containerd-ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22.scope - libcontainer container ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22.
Sep 4 00:08:19.008516 containerd[2012]: time="2025-09-04T00:08:19.008418537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9w4ng,Uid:b5ea0edb-a4f9-4a16-8a7c-1f30cb9bae9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22\""
Sep 4 00:08:20.464791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073351813.mount: Deactivated successfully.
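The driver-call failures above are kubelet probing for a FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds that is not installed: the exec fails, stdout stays empty, and unmarshalling "" as JSON yields "unexpected end of JSON input". For reference, a FlexVolume driver's `init` call is expected to print a JSON status object to stdout. A minimal stub sketch of that call convention (the /tmp path and the stub itself are illustrative, not the real nodeagent~uds driver):

```shell
# Sketch of the FlexVolume driver call convention kubelet uses above:
# the driver binary is invoked as "<driver> init" and must print a JSON
# status object to stdout. An absent binary (as in this log) produces
# empty output, hence "unexpected end of JSON input".
# /tmp/uds-stub is an illustrative path, not the real plugin directory.
driver=/tmp/uds-stub
cat > "$driver" <<'EOF'
#!/bin/sh
case "$1" in
  init)
    # Report success; "attach": false tells kubelet no attach/detach step exists.
    echo '{"status": "Success", "capabilities": {"attach": false}}'
    ;;
  *)
    echo '{"status": "Not supported"}'
    exit 1
    ;;
esac
EOF
chmod +x "$driver"
"$driver" init
```

Installing a driver that answers `init` this way (or removing the nodeagent~uds plugin directory) would stop kubelet's periodic re-probing seen throughout this log.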
Sep 4 00:08:20.772901 kubelet[3275]: E0904 00:08:20.772825 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4csm7" podUID="855604f4-24b1-40c7-86a6-198bf7be5142"
Sep 4 00:08:21.675363 containerd[2012]: time="2025-09-04T00:08:21.675304491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:21.677025 containerd[2012]: time="2025-09-04T00:08:21.676972387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 4 00:08:21.679319 containerd[2012]: time="2025-09-04T00:08:21.679265624Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:21.681330 containerd[2012]: time="2025-09-04T00:08:21.681279050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:21.681947 containerd[2012]: time="2025-09-04T00:08:21.681768128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.83561171s"
Sep 4 00:08:21.681947 containerd[2012]: time="2025-09-04T00:08:21.681798957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 4 00:08:21.683029 containerd[2012]: time="2025-09-04T00:08:21.683007835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 4 00:08:21.698884 containerd[2012]: time="2025-09-04T00:08:21.698823889Z" level=info msg="CreateContainer within sandbox \"6648053002d50e843487fe6167d439044a9fe9df1b8242bbbac9cdf3eeeb6cd1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 00:08:21.707411 containerd[2012]: time="2025-09-04T00:08:21.707374165Z" level=info msg="Container 9840a07cb306be75475877a4bb1f6ad78de8375971f949392a3145da3f4393cc: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:08:21.728821 containerd[2012]: time="2025-09-04T00:08:21.728533375Z" level=info msg="CreateContainer within sandbox \"6648053002d50e843487fe6167d439044a9fe9df1b8242bbbac9cdf3eeeb6cd1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9840a07cb306be75475877a4bb1f6ad78de8375971f949392a3145da3f4393cc\""
Sep 4 00:08:21.730093 containerd[2012]: time="2025-09-04T00:08:21.730061292Z" level=info msg="StartContainer for \"9840a07cb306be75475877a4bb1f6ad78de8375971f949392a3145da3f4393cc\""
Sep 4 00:08:21.732050 containerd[2012]: time="2025-09-04T00:08:21.732008436Z" level=info msg="connecting to shim 9840a07cb306be75475877a4bb1f6ad78de8375971f949392a3145da3f4393cc" address="unix:///run/containerd/s/58650ee4be0fdc5ea3ee039a72efbd7dfa30180ad41e8ed3ca1e5e8cfb1afe3a" protocol=ttrpc version=3
Sep 4 00:08:21.768699 systemd[1]: Started cri-containerd-9840a07cb306be75475877a4bb1f6ad78de8375971f949392a3145da3f4393cc.scope - libcontainer container 9840a07cb306be75475877a4bb1f6ad78de8375971f949392a3145da3f4393cc.
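The typha pull duration shows up twice in this log: containerd reports "in 2.83561171s" on the Pulled-image record, and kubelet's pod startup latency tracker later reports firstStartedPulling/lastFinishedPulling stamps roughly the same distance apart. A sketch of extracting that duration from such a record with sed (the $line value is abridged from the log; the pattern assumes the record ends with `in <seconds>s"`):

```shell
# Extract the pull duration, in seconds, from a containerd "Pulled image"
# record. The sample line is abridged from the log above.
line='Sep 4 00:08:21.681947 containerd[2012]: time="2025-09-04T00:08:21.681768128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" (fields abridged) in 2.83561171s"'
dur=$(printf '%s\n' "$line" | sed -n 's/.* in \([0-9][0-9.]*\)s".*/\1/p')
echo "$dur"
```

The same pattern works across a whole journal (`journalctl -u containerd | sed -n ...`) to spot slow image pulls.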
Sep 4 00:08:21.888330 containerd[2012]: time="2025-09-04T00:08:21.888274116Z" level=info msg="StartContainer for \"9840a07cb306be75475877a4bb1f6ad78de8375971f949392a3145da3f4393cc\" returns successfully" Sep 4 00:08:22.053666 kubelet[3275]: I0904 00:08:22.053550 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-984589745-5bbgh" podStartSLOduration=1.2154786180000001 podStartE2EDuration="4.05351435s" podCreationTimestamp="2025-09-04 00:08:18 +0000 UTC" firstStartedPulling="2025-09-04 00:08:18.844832529 +0000 UTC m=+23.217170005" lastFinishedPulling="2025-09-04 00:08:21.682868263 +0000 UTC m=+26.055205737" observedRunningTime="2025-09-04 00:08:22.053398194 +0000 UTC m=+26.425735690" watchObservedRunningTime="2025-09-04 00:08:22.05351435 +0000 UTC m=+26.425851841" Sep 4 00:08:22.073141 kubelet[3275]: E0904 00:08:22.072945 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.073141 kubelet[3275]: W0904 00:08:22.072977 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.073141 kubelet[3275]: E0904 00:08:22.073004 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.073677 kubelet[3275]: E0904 00:08:22.073514 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.073677 kubelet[3275]: W0904 00:08:22.073533 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.073677 kubelet[3275]: E0904 00:08:22.073550 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:22.074544 kubelet[3275]: E0904 00:08:22.074426 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.074544 kubelet[3275]: W0904 00:08:22.074453 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.074544 kubelet[3275]: E0904 00:08:22.074471 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.084195 kubelet[3275]: E0904 00:08:22.084081 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.084621 kubelet[3275]: W0904 00:08:22.084269 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.084621 kubelet[3275]: E0904 00:08:22.084317 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:22.085294 kubelet[3275]: E0904 00:08:22.085275 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.085639 kubelet[3275]: W0904 00:08:22.085407 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.085639 kubelet[3275]: E0904 00:08:22.085435 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.116199 kubelet[3275]: E0904 00:08:22.085764 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.116199 kubelet[3275]: W0904 00:08:22.085775 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.116199 kubelet[3275]: E0904 00:08:22.085790 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:22.116199 kubelet[3275]: E0904 00:08:22.086039 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.116199 kubelet[3275]: W0904 00:08:22.086050 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.116199 kubelet[3275]: E0904 00:08:22.086293 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.116199 kubelet[3275]: E0904 00:08:22.086588 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.116199 kubelet[3275]: W0904 00:08:22.086599 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.116199 kubelet[3275]: E0904 00:08:22.086612 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:22.116199 kubelet[3275]: E0904 00:08:22.086865 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.116725 kubelet[3275]: W0904 00:08:22.086877 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.116725 kubelet[3275]: E0904 00:08:22.086890 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.116725 kubelet[3275]: E0904 00:08:22.087117 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.116725 kubelet[3275]: W0904 00:08:22.087130 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.116725 kubelet[3275]: E0904 00:08:22.087310 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:22.116725 kubelet[3275]: E0904 00:08:22.087729 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.116725 kubelet[3275]: W0904 00:08:22.087741 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.116725 kubelet[3275]: E0904 00:08:22.087755 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.116725 kubelet[3275]: E0904 00:08:22.088018 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.116725 kubelet[3275]: W0904 00:08:22.088038 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.117099 kubelet[3275]: E0904 00:08:22.088051 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:22.117099 kubelet[3275]: E0904 00:08:22.088311 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.117099 kubelet[3275]: W0904 00:08:22.088322 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.117099 kubelet[3275]: E0904 00:08:22.088345 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.117099 kubelet[3275]: E0904 00:08:22.088591 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.117099 kubelet[3275]: W0904 00:08:22.088602 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.117099 kubelet[3275]: E0904 00:08:22.088615 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:22.117099 kubelet[3275]: E0904 00:08:22.088837 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.117099 kubelet[3275]: W0904 00:08:22.088858 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.117099 kubelet[3275]: E0904 00:08:22.088871 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.118061 kubelet[3275]: E0904 00:08:22.101494 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.118061 kubelet[3275]: W0904 00:08:22.101516 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.118061 kubelet[3275]: E0904 00:08:22.101542 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:22.118061 kubelet[3275]: E0904 00:08:22.101942 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:22.118061 kubelet[3275]: W0904 00:08:22.101965 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:22.118061 kubelet[3275]: E0904 00:08:22.102003 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:22.770722 kubelet[3275]: E0904 00:08:22.770686 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4csm7" podUID="855604f4-24b1-40c7-86a6-198bf7be5142" Sep 4 00:08:23.015984 kubelet[3275]: I0904 00:08:23.015686 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:08:23.063649 containerd[2012]: time="2025-09-04T00:08:23.063539718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:23.064770 containerd[2012]: time="2025-09-04T00:08:23.064644443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 00:08:23.065642 containerd[2012]: time="2025-09-04T00:08:23.065613251Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:23.067975 containerd[2012]: time="2025-09-04T00:08:23.067427600Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:23.067975 containerd[2012]: time="2025-09-04T00:08:23.067869823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.384834337s" Sep 4 00:08:23.067975 containerd[2012]: time="2025-09-04T00:08:23.067896982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 00:08:23.070878 containerd[2012]: time="2025-09-04T00:08:23.070844276Z" level=info msg="CreateContainer within sandbox \"ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 00:08:23.081410 containerd[2012]: time="2025-09-04T00:08:23.079319279Z" level=info msg="Container 977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:08:23.096923 kubelet[3275]: E0904 00:08:23.096885 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:23.096923 kubelet[3275]: W0904 00:08:23.096919 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:23.097941 containerd[2012]: time="2025-09-04T00:08:23.097898480Z" level=info msg="CreateContainer within 
sandbox \"ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5\"" Sep 4 00:08:23.098683 containerd[2012]: time="2025-09-04T00:08:23.098658377Z" level=info msg="StartContainer for \"977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5\"" Sep 4 00:08:23.100596 containerd[2012]: time="2025-09-04T00:08:23.100380301Z" level=info msg="connecting to shim 977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5" address="unix:///run/containerd/s/3ac1c8ceebcad29de4cf629c011772ce9c5c938c4443e13a840488248eb734fd" protocol=ttrpc version=3 Sep 4 00:08:23.100820 kubelet[3275]: E0904 00:08:23.100748 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:08:23.101098 kubelet[3275]: E0904 00:08:23.101081 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:08:23.101230 kubelet[3275]: W0904 00:08:23.101168 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:08:23.101230 kubelet[3275]: E0904 00:08:23.101187 3275 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:08:23.133479 systemd[1]: Started cri-containerd-977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5.scope - libcontainer container 977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5. Sep 4 00:08:23.181025 containerd[2012]: time="2025-09-04T00:08:23.180988930Z" level=info msg="StartContainer for \"977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5\" returns successfully" Sep 4 00:08:23.192547 systemd[1]: cri-containerd-977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5.scope: Deactivated successfully. 
Sep 4 00:08:23.216099 containerd[2012]: time="2025-09-04T00:08:23.214995156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5\" id:\"977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5\" pid:4077 exited_at:{seconds:1756944503 nanos:197797209}" Sep 4 00:08:23.259935 containerd[2012]: time="2025-09-04T00:08:23.259894551Z" level=info msg="received exit event container_id:\"977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5\" id:\"977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5\" pid:4077 exited_at:{seconds:1756944503 nanos:197797209}" Sep 4 00:08:23.299970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-977c302d876616a176833b45992eae4d40dacec51de645dc1e2646bea3ed60b5-rootfs.mount: Deactivated successfully. Sep 4 00:08:24.027118 containerd[2012]: time="2025-09-04T00:08:24.026841587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 00:08:24.770409 kubelet[3275]: E0904 00:08:24.770352 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4csm7" podUID="855604f4-24b1-40c7-86a6-198bf7be5142" Sep 4 00:08:26.770209 kubelet[3275]: E0904 00:08:26.770159 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4csm7" podUID="855604f4-24b1-40c7-86a6-198bf7be5142" Sep 4 00:08:27.865318 containerd[2012]: time="2025-09-04T00:08:27.865260298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 
00:08:27.866496 containerd[2012]: time="2025-09-04T00:08:27.866454289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 4 00:08:27.867787 containerd[2012]: time="2025-09-04T00:08:27.867726202Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:27.869984 containerd[2012]: time="2025-09-04T00:08:27.869920615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:27.871125 containerd[2012]: time="2025-09-04T00:08:27.870603753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.843718408s" Sep 4 00:08:27.871125 containerd[2012]: time="2025-09-04T00:08:27.870643487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 4 00:08:27.892960 containerd[2012]: time="2025-09-04T00:08:27.892907152Z" level=info msg="CreateContainer within sandbox \"ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 00:08:27.906587 containerd[2012]: time="2025-09-04T00:08:27.906530509Z" level=info msg="Container 643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:08:27.932677 containerd[2012]: time="2025-09-04T00:08:27.932618309Z" level=info msg="CreateContainer within sandbox 
\"ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10\"" Sep 4 00:08:27.934619 containerd[2012]: time="2025-09-04T00:08:27.934590093Z" level=info msg="StartContainer for \"643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10\"" Sep 4 00:08:27.935968 containerd[2012]: time="2025-09-04T00:08:27.935936733Z" level=info msg="connecting to shim 643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10" address="unix:///run/containerd/s/3ac1c8ceebcad29de4cf629c011772ce9c5c938c4443e13a840488248eb734fd" protocol=ttrpc version=3 Sep 4 00:08:27.959420 systemd[1]: Started cri-containerd-643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10.scope - libcontainer container 643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10. Sep 4 00:08:28.047490 containerd[2012]: time="2025-09-04T00:08:28.047453644Z" level=info msg="StartContainer for \"643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10\" returns successfully" Sep 4 00:08:28.770846 kubelet[3275]: E0904 00:08:28.770609 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4csm7" podUID="855604f4-24b1-40c7-86a6-198bf7be5142" Sep 4 00:08:29.158934 systemd[1]: cri-containerd-643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10.scope: Deactivated successfully. Sep 4 00:08:29.159182 systemd[1]: cri-containerd-643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10.scope: Consumed 586ms CPU time, 166.8M memory peak, 5.5M read from disk, 171.3M written to disk. 
Sep 4 00:08:29.253888 containerd[2012]: time="2025-09-04T00:08:29.253840305Z" level=info msg="received exit event container_id:\"643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10\" id:\"643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10\" pid:4135 exited_at:{seconds:1756944509 nanos:253159631}" Sep 4 00:08:29.258176 containerd[2012]: time="2025-09-04T00:08:29.258128473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10\" id:\"643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10\" pid:4135 exited_at:{seconds:1756944509 nanos:253159631}" Sep 4 00:08:29.350192 kubelet[3275]: I0904 00:08:29.350157 3275 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 00:08:29.392930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-643d8419e94fc8eba1c3ee7e6dee494bfbaddd0c1cf24dc2aff9835ae0c99a10-rootfs.mount: Deactivated successfully. Sep 4 00:08:29.480402 systemd[1]: Created slice kubepods-burstable-pod785d2c30_0b96_43f5_9b76_56ac0617432c.slice - libcontainer container kubepods-burstable-pod785d2c30_0b96_43f5_9b76_56ac0617432c.slice. Sep 4 00:08:29.500168 systemd[1]: Created slice kubepods-burstable-pod6d71db07_e698_49b3_b244_3fc1673cedef.slice - libcontainer container kubepods-burstable-pod6d71db07_e698_49b3_b244_3fc1673cedef.slice. Sep 4 00:08:29.526053 systemd[1]: Created slice kubepods-besteffort-pod23f0ea80_398b_4f23_8e24_f4c05cda5b7a.slice - libcontainer container kubepods-besteffort-pod23f0ea80_398b_4f23_8e24_f4c05cda5b7a.slice. Sep 4 00:08:29.534344 systemd[1]: Created slice kubepods-besteffort-pod85b1ac46_1183_4716_965a_da420e906863.slice - libcontainer container kubepods-besteffort-pod85b1ac46_1183_4716_965a_da420e906863.slice. 
Sep 4 00:08:29.546509 systemd[1]: Created slice kubepods-besteffort-pod47765c8f_e89b_4d77_9556_3f7f45952dc3.slice - libcontainer container kubepods-besteffort-pod47765c8f_e89b_4d77_9556_3f7f45952dc3.slice. Sep 4 00:08:29.551716 kubelet[3275]: I0904 00:08:29.550121 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmth\" (UniqueName: \"kubernetes.io/projected/85b1ac46-1183-4716-965a-da420e906863-kube-api-access-zbmth\") pod \"calico-apiserver-664cb9f68d-t5fls\" (UID: \"85b1ac46-1183-4716-965a-da420e906863\") " pod="calico-apiserver/calico-apiserver-664cb9f68d-t5fls" Sep 4 00:08:29.553197 kubelet[3275]: I0904 00:08:29.553091 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hfv\" (UniqueName: \"kubernetes.io/projected/aa9754da-3cb3-4d5b-886e-a8706c00d845-kube-api-access-82hfv\") pod \"goldmane-54d579b49d-4dg2h\" (UID: \"aa9754da-3cb3-4d5b-886e-a8706c00d845\") " pod="calico-system/goldmane-54d579b49d-4dg2h" Sep 4 00:08:29.553448 kubelet[3275]: I0904 00:08:29.553381 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrmds\" (UniqueName: \"kubernetes.io/projected/5f012bd7-0ff2-46a3-ac4f-80680b0884e0-kube-api-access-wrmds\") pod \"calico-apiserver-6d5d477df-hxgst\" (UID: \"5f012bd7-0ff2-46a3-ac4f-80680b0884e0\") " pod="calico-apiserver/calico-apiserver-6d5d477df-hxgst" Sep 4 00:08:29.553785 kubelet[3275]: I0904 00:08:29.553635 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/785d2c30-0b96-43f5-9b76-56ac0617432c-config-volume\") pod \"coredns-668d6bf9bc-rs87l\" (UID: \"785d2c30-0b96-43f5-9b76-56ac0617432c\") " pod="kube-system/coredns-668d6bf9bc-rs87l" Sep 4 00:08:29.554018 kubelet[3275]: I0904 00:08:29.553955 3275 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnwl7\" (UniqueName: \"kubernetes.io/projected/785d2c30-0b96-43f5-9b76-56ac0617432c-kube-api-access-dnwl7\") pod \"coredns-668d6bf9bc-rs87l\" (UID: \"785d2c30-0b96-43f5-9b76-56ac0617432c\") " pod="kube-system/coredns-668d6bf9bc-rs87l" Sep 4 00:08:29.558080 kubelet[3275]: I0904 00:08:29.558038 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/85b1ac46-1183-4716-965a-da420e906863-calico-apiserver-certs\") pod \"calico-apiserver-664cb9f68d-t5fls\" (UID: \"85b1ac46-1183-4716-965a-da420e906863\") " pod="calico-apiserver/calico-apiserver-664cb9f68d-t5fls" Sep 4 00:08:29.558219 kubelet[3275]: I0904 00:08:29.558115 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f012bd7-0ff2-46a3-ac4f-80680b0884e0-calico-apiserver-certs\") pod \"calico-apiserver-6d5d477df-hxgst\" (UID: \"5f012bd7-0ff2-46a3-ac4f-80680b0884e0\") " pod="calico-apiserver/calico-apiserver-6d5d477df-hxgst" Sep 4 00:08:29.558219 kubelet[3275]: I0904 00:08:29.558156 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa9754da-3cb3-4d5b-886e-a8706c00d845-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-4dg2h\" (UID: \"aa9754da-3cb3-4d5b-886e-a8706c00d845\") " pod="calico-system/goldmane-54d579b49d-4dg2h" Sep 4 00:08:29.558219 kubelet[3275]: I0904 00:08:29.558191 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/aa9754da-3cb3-4d5b-886e-a8706c00d845-goldmane-key-pair\") pod \"goldmane-54d579b49d-4dg2h\" (UID: \"aa9754da-3cb3-4d5b-886e-a8706c00d845\") " 
pod="calico-system/goldmane-54d579b49d-4dg2h" Sep 4 00:08:29.558405 kubelet[3275]: I0904 00:08:29.558220 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdtv\" (UniqueName: \"kubernetes.io/projected/e303bdd1-9653-4113-9d2e-efb6f383697a-kube-api-access-5vdtv\") pod \"calico-apiserver-664cb9f68d-sjbqq\" (UID: \"e303bdd1-9653-4113-9d2e-efb6f383697a\") " pod="calico-apiserver/calico-apiserver-664cb9f68d-sjbqq" Sep 4 00:08:29.559245 systemd[1]: Created slice kubepods-besteffort-pode303bdd1_9653_4113_9d2e_efb6f383697a.slice - libcontainer container kubepods-besteffort-pode303bdd1_9653_4113_9d2e_efb6f383697a.slice. Sep 4 00:08:29.560572 kubelet[3275]: I0904 00:08:29.560531 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e303bdd1-9653-4113-9d2e-efb6f383697a-calico-apiserver-certs\") pod \"calico-apiserver-664cb9f68d-sjbqq\" (UID: \"e303bdd1-9653-4113-9d2e-efb6f383697a\") " pod="calico-apiserver/calico-apiserver-664cb9f68d-sjbqq" Sep 4 00:08:29.560763 kubelet[3275]: I0904 00:08:29.560589 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa9754da-3cb3-4d5b-886e-a8706c00d845-config\") pod \"goldmane-54d579b49d-4dg2h\" (UID: \"aa9754da-3cb3-4d5b-886e-a8706c00d845\") " pod="calico-system/goldmane-54d579b49d-4dg2h" Sep 4 00:08:29.577282 systemd[1]: Created slice kubepods-besteffort-podaa9754da_3cb3_4d5b_886e_a8706c00d845.slice - libcontainer container kubepods-besteffort-podaa9754da_3cb3_4d5b_886e_a8706c00d845.slice. Sep 4 00:08:29.589747 systemd[1]: Created slice kubepods-besteffort-pod5f012bd7_0ff2_46a3_ac4f_80680b0884e0.slice - libcontainer container kubepods-besteffort-pod5f012bd7_0ff2_46a3_ac4f_80680b0884e0.slice. 
Sep 4 00:08:29.661424 kubelet[3275]: I0904 00:08:29.661373 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23f0ea80-398b-4f23-8e24-f4c05cda5b7a-tigera-ca-bundle\") pod \"calico-kube-controllers-655448545f-s7m6x\" (UID: \"23f0ea80-398b-4f23-8e24-f4c05cda5b7a\") " pod="calico-system/calico-kube-controllers-655448545f-s7m6x" Sep 4 00:08:29.661424 kubelet[3275]: I0904 00:08:29.661422 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47765c8f-e89b-4d77-9556-3f7f45952dc3-whisker-ca-bundle\") pod \"whisker-56bfc954f4-lf2wc\" (UID: \"47765c8f-e89b-4d77-9556-3f7f45952dc3\") " pod="calico-system/whisker-56bfc954f4-lf2wc" Sep 4 00:08:29.661598 kubelet[3275]: I0904 00:08:29.661467 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5vq\" (UniqueName: \"kubernetes.io/projected/6d71db07-e698-49b3-b244-3fc1673cedef-kube-api-access-4f5vq\") pod \"coredns-668d6bf9bc-9hw58\" (UID: \"6d71db07-e698-49b3-b244-3fc1673cedef\") " pod="kube-system/coredns-668d6bf9bc-9hw58" Sep 4 00:08:29.661598 kubelet[3275]: I0904 00:08:29.661514 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47765c8f-e89b-4d77-9556-3f7f45952dc3-whisker-backend-key-pair\") pod \"whisker-56bfc954f4-lf2wc\" (UID: \"47765c8f-e89b-4d77-9556-3f7f45952dc3\") " pod="calico-system/whisker-56bfc954f4-lf2wc" Sep 4 00:08:29.661598 kubelet[3275]: I0904 00:08:29.661546 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2w6v\" (UniqueName: \"kubernetes.io/projected/47765c8f-e89b-4d77-9556-3f7f45952dc3-kube-api-access-q2w6v\") pod \"whisker-56bfc954f4-lf2wc\" (UID: 
\"47765c8f-e89b-4d77-9556-3f7f45952dc3\") " pod="calico-system/whisker-56bfc954f4-lf2wc" Sep 4 00:08:29.661598 kubelet[3275]: I0904 00:08:29.661593 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d71db07-e698-49b3-b244-3fc1673cedef-config-volume\") pod \"coredns-668d6bf9bc-9hw58\" (UID: \"6d71db07-e698-49b3-b244-3fc1673cedef\") " pod="kube-system/coredns-668d6bf9bc-9hw58" Sep 4 00:08:29.661713 kubelet[3275]: I0904 00:08:29.661649 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bvmh\" (UniqueName: \"kubernetes.io/projected/23f0ea80-398b-4f23-8e24-f4c05cda5b7a-kube-api-access-4bvmh\") pod \"calico-kube-controllers-655448545f-s7m6x\" (UID: \"23f0ea80-398b-4f23-8e24-f4c05cda5b7a\") " pod="calico-system/calico-kube-controllers-655448545f-s7m6x" Sep 4 00:08:29.799252 containerd[2012]: time="2025-09-04T00:08:29.799107280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs87l,Uid:785d2c30-0b96-43f5-9b76-56ac0617432c,Namespace:kube-system,Attempt:0,}" Sep 4 00:08:29.818579 containerd[2012]: time="2025-09-04T00:08:29.818495297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9hw58,Uid:6d71db07-e698-49b3-b244-3fc1673cedef,Namespace:kube-system,Attempt:0,}" Sep 4 00:08:29.845869 containerd[2012]: time="2025-09-04T00:08:29.845682158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655448545f-s7m6x,Uid:23f0ea80-398b-4f23-8e24-f4c05cda5b7a,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:29.845869 containerd[2012]: time="2025-09-04T00:08:29.845761291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664cb9f68d-t5fls,Uid:85b1ac46-1183-4716-965a-da420e906863,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:08:29.875348 containerd[2012]: time="2025-09-04T00:08:29.875314266Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664cb9f68d-sjbqq,Uid:e303bdd1-9653-4113-9d2e-efb6f383697a,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:08:29.877684 containerd[2012]: time="2025-09-04T00:08:29.877648650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bfc954f4-lf2wc,Uid:47765c8f-e89b-4d77-9556-3f7f45952dc3,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:29.886742 containerd[2012]: time="2025-09-04T00:08:29.886704806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-4dg2h,Uid:aa9754da-3cb3-4d5b-886e-a8706c00d845,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:29.897579 containerd[2012]: time="2025-09-04T00:08:29.897504203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d477df-hxgst,Uid:5f012bd7-0ff2-46a3-ac4f-80680b0884e0,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:08:30.058441 containerd[2012]: time="2025-09-04T00:08:30.058220283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 00:08:30.778318 systemd[1]: Created slice kubepods-besteffort-pod855604f4_24b1_40c7_86a6_198bf7be5142.slice - libcontainer container kubepods-besteffort-pod855604f4_24b1_40c7_86a6_198bf7be5142.slice. 
Sep 4 00:08:30.781395 containerd[2012]: time="2025-09-04T00:08:30.781362765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4csm7,Uid:855604f4-24b1-40c7-86a6-198bf7be5142,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:32.076563 containerd[2012]: time="2025-09-04T00:08:32.076504970Z" level=error msg="Failed to destroy network for sandbox \"f7e8314b58ea690701d60f12ccd6ad2b987e04596d0408f9e8c20db5601023cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.080229 systemd[1]: run-netns-cni\x2dc1561936\x2d889c\x2dc557\x2d5411\x2d6c0a64c51a6e.mount: Deactivated successfully. Sep 4 00:08:32.085403 containerd[2012]: time="2025-09-04T00:08:32.085331850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664cb9f68d-sjbqq,Uid:e303bdd1-9653-4113-9d2e-efb6f383697a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e8314b58ea690701d60f12ccd6ad2b987e04596d0408f9e8c20db5601023cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.085693 kubelet[3275]: E0904 00:08:32.085632 3275 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e8314b58ea690701d60f12ccd6ad2b987e04596d0408f9e8c20db5601023cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.086833 kubelet[3275]: E0904 00:08:32.085719 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f7e8314b58ea690701d60f12ccd6ad2b987e04596d0408f9e8c20db5601023cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-664cb9f68d-sjbqq" Sep 4 00:08:32.086833 kubelet[3275]: E0904 00:08:32.085740 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e8314b58ea690701d60f12ccd6ad2b987e04596d0408f9e8c20db5601023cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-664cb9f68d-sjbqq" Sep 4 00:08:32.086833 kubelet[3275]: E0904 00:08:32.085801 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-664cb9f68d-sjbqq_calico-apiserver(e303bdd1-9653-4113-9d2e-efb6f383697a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-664cb9f68d-sjbqq_calico-apiserver(e303bdd1-9653-4113-9d2e-efb6f383697a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7e8314b58ea690701d60f12ccd6ad2b987e04596d0408f9e8c20db5601023cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-664cb9f68d-sjbqq" podUID="e303bdd1-9653-4113-9d2e-efb6f383697a" Sep 4 00:08:32.086962 containerd[2012]: time="2025-09-04T00:08:32.086211880Z" level=error msg="Failed to destroy network for sandbox \"8520e74253fc9a570522795b12e70ecc370dbb7280f3637321baa96a39b5f3ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 4 00:08:32.090449 systemd[1]: run-netns-cni\x2dd36b49be\x2d4d56\x2d5ece\x2dc895\x2dcccb50924246.mount: Deactivated successfully. Sep 4 00:08:32.093358 containerd[2012]: time="2025-09-04T00:08:32.093104728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-4dg2h,Uid:aa9754da-3cb3-4d5b-886e-a8706c00d845,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8520e74253fc9a570522795b12e70ecc370dbb7280f3637321baa96a39b5f3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.093358 containerd[2012]: time="2025-09-04T00:08:32.093343148Z" level=error msg="Failed to destroy network for sandbox \"fca0fb1620103310aa1b11b59614e54344a518fb147255dba14ec072f9d26e1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.095527 kubelet[3275]: E0904 00:08:32.095354 3275 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8520e74253fc9a570522795b12e70ecc370dbb7280f3637321baa96a39b5f3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.095527 kubelet[3275]: E0904 00:08:32.095425 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8520e74253fc9a570522795b12e70ecc370dbb7280f3637321baa96a39b5f3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-4dg2h" Sep 4 00:08:32.095527 kubelet[3275]: E0904 00:08:32.095450 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8520e74253fc9a570522795b12e70ecc370dbb7280f3637321baa96a39b5f3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-4dg2h" Sep 4 00:08:32.095683 kubelet[3275]: E0904 00:08:32.095498 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-4dg2h_calico-system(aa9754da-3cb3-4d5b-886e-a8706c00d845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-4dg2h_calico-system(aa9754da-3cb3-4d5b-886e-a8706c00d845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8520e74253fc9a570522795b12e70ecc370dbb7280f3637321baa96a39b5f3ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-4dg2h" podUID="aa9754da-3cb3-4d5b-886e-a8706c00d845" Sep 4 00:08:32.096496 systemd[1]: run-netns-cni\x2dcf7d1f32\x2d7960\x2dda7b\x2d3f32\x2dd4f8bec7b606.mount: Deactivated successfully. 
Sep 4 00:08:32.098354 containerd[2012]: time="2025-09-04T00:08:32.098317675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d477df-hxgst,Uid:5f012bd7-0ff2-46a3-ac4f-80680b0884e0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca0fb1620103310aa1b11b59614e54344a518fb147255dba14ec072f9d26e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.098525 kubelet[3275]: E0904 00:08:32.098498 3275 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca0fb1620103310aa1b11b59614e54344a518fb147255dba14ec072f9d26e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.098581 kubelet[3275]: E0904 00:08:32.098546 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca0fb1620103310aa1b11b59614e54344a518fb147255dba14ec072f9d26e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5d477df-hxgst" Sep 4 00:08:32.098611 kubelet[3275]: E0904 00:08:32.098566 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca0fb1620103310aa1b11b59614e54344a518fb147255dba14ec072f9d26e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6d5d477df-hxgst" Sep 4 00:08:32.098778 kubelet[3275]: E0904 00:08:32.098749 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5d477df-hxgst_calico-apiserver(5f012bd7-0ff2-46a3-ac4f-80680b0884e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5d477df-hxgst_calico-apiserver(5f012bd7-0ff2-46a3-ac4f-80680b0884e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fca0fb1620103310aa1b11b59614e54344a518fb147255dba14ec072f9d26e1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5d477df-hxgst" podUID="5f012bd7-0ff2-46a3-ac4f-80680b0884e0" Sep 4 00:08:32.105683 containerd[2012]: time="2025-09-04T00:08:32.105641815Z" level=error msg="Failed to destroy network for sandbox \"9448dee625a828abb297b5af746692758336769158d6f28b746618bb9ae599ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.107991 systemd[1]: run-netns-cni\x2d77ec38d5\x2d9744\x2dd1a3\x2d4cab\x2de7613799f799.mount: Deactivated successfully. 
Sep 4 00:08:32.109859 containerd[2012]: time="2025-09-04T00:08:32.109589322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9hw58,Uid:6d71db07-e698-49b3-b244-3fc1673cedef,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9448dee625a828abb297b5af746692758336769158d6f28b746618bb9ae599ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.110821 kubelet[3275]: E0904 00:08:32.110017 3275 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9448dee625a828abb297b5af746692758336769158d6f28b746618bb9ae599ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.110821 kubelet[3275]: E0904 00:08:32.110067 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9448dee625a828abb297b5af746692758336769158d6f28b746618bb9ae599ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9hw58" Sep 4 00:08:32.110821 kubelet[3275]: E0904 00:08:32.110086 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9448dee625a828abb297b5af746692758336769158d6f28b746618bb9ae599ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9hw58" 
Sep 4 00:08:32.110947 kubelet[3275]: E0904 00:08:32.110121 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9hw58_kube-system(6d71db07-e698-49b3-b244-3fc1673cedef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9hw58_kube-system(6d71db07-e698-49b3-b244-3fc1673cedef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9448dee625a828abb297b5af746692758336769158d6f28b746618bb9ae599ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9hw58" podUID="6d71db07-e698-49b3-b244-3fc1673cedef" Sep 4 00:08:32.120266 containerd[2012]: time="2025-09-04T00:08:32.117551217Z" level=error msg="Failed to destroy network for sandbox \"35de386831ea7eba0de533229791ad3ffac55e0d1296001646fcf6993a1f8219\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.120687 containerd[2012]: time="2025-09-04T00:08:32.120640387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664cb9f68d-t5fls,Uid:85b1ac46-1183-4716-965a-da420e906863,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35de386831ea7eba0de533229791ad3ffac55e0d1296001646fcf6993a1f8219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.121165 systemd[1]: run-netns-cni\x2d1bbc51f4\x2d7f88\x2d8eee\x2d1206\x2de0da144a8fa8.mount: Deactivated successfully. 
Sep 4 00:08:32.121610 kubelet[3275]: E0904 00:08:32.120908 3275 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35de386831ea7eba0de533229791ad3ffac55e0d1296001646fcf6993a1f8219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.121768 kubelet[3275]: E0904 00:08:32.121743 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35de386831ea7eba0de533229791ad3ffac55e0d1296001646fcf6993a1f8219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-664cb9f68d-t5fls" Sep 4 00:08:32.122611 kubelet[3275]: E0904 00:08:32.121780 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35de386831ea7eba0de533229791ad3ffac55e0d1296001646fcf6993a1f8219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-664cb9f68d-t5fls" Sep 4 00:08:32.123263 kubelet[3275]: E0904 00:08:32.123207 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-664cb9f68d-t5fls_calico-apiserver(85b1ac46-1183-4716-965a-da420e906863)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-664cb9f68d-t5fls_calico-apiserver(85b1ac46-1183-4716-965a-da420e906863)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"35de386831ea7eba0de533229791ad3ffac55e0d1296001646fcf6993a1f8219\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-664cb9f68d-t5fls" podUID="85b1ac46-1183-4716-965a-da420e906863" Sep 4 00:08:32.133592 containerd[2012]: time="2025-09-04T00:08:32.133551042Z" level=error msg="Failed to destroy network for sandbox \"a3b117a1626c98ecf1043adf47079a9663e4a1bf6b0a00edaef13b9d7e0ba082\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.134038 containerd[2012]: time="2025-09-04T00:08:32.134013296Z" level=error msg="Failed to destroy network for sandbox \"35f6fbe4deef04a8b382132b783ed23cb1e786e61c11bd51b2f658e90c4aba9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.135565 containerd[2012]: time="2025-09-04T00:08:32.135520366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655448545f-s7m6x,Uid:23f0ea80-398b-4f23-8e24-f4c05cda5b7a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3b117a1626c98ecf1043adf47079a9663e4a1bf6b0a00edaef13b9d7e0ba082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.135875 containerd[2012]: time="2025-09-04T00:08:32.135855747Z" level=error msg="Failed to destroy network for sandbox \"9b69270cdf945340bd4dd4a321d59023988ad91c94f1d50153bb18289f13d584\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.136743 kubelet[3275]: E0904 00:08:32.136410 3275 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3b117a1626c98ecf1043adf47079a9663e4a1bf6b0a00edaef13b9d7e0ba082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.136743 kubelet[3275]: E0904 00:08:32.136463 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3b117a1626c98ecf1043adf47079a9663e4a1bf6b0a00edaef13b9d7e0ba082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-655448545f-s7m6x" Sep 4 00:08:32.136743 kubelet[3275]: E0904 00:08:32.136483 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3b117a1626c98ecf1043adf47079a9663e4a1bf6b0a00edaef13b9d7e0ba082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-655448545f-s7m6x" Sep 4 00:08:32.136869 kubelet[3275]: E0904 00:08:32.136519 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-655448545f-s7m6x_calico-system(23f0ea80-398b-4f23-8e24-f4c05cda5b7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-655448545f-s7m6x_calico-system(23f0ea80-398b-4f23-8e24-f4c05cda5b7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3b117a1626c98ecf1043adf47079a9663e4a1bf6b0a00edaef13b9d7e0ba082\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-655448545f-s7m6x" podUID="23f0ea80-398b-4f23-8e24-f4c05cda5b7a" Sep 4 00:08:32.137739 containerd[2012]: time="2025-09-04T00:08:32.137607829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs87l,Uid:785d2c30-0b96-43f5-9b76-56ac0617432c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35f6fbe4deef04a8b382132b783ed23cb1e786e61c11bd51b2f658e90c4aba9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.138168 kubelet[3275]: E0904 00:08:32.137933 3275 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35f6fbe4deef04a8b382132b783ed23cb1e786e61c11bd51b2f658e90c4aba9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.138168 kubelet[3275]: E0904 00:08:32.138112 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35f6fbe4deef04a8b382132b783ed23cb1e786e61c11bd51b2f658e90c4aba9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-rs87l" Sep 4 00:08:32.138168 kubelet[3275]: E0904 00:08:32.138129 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35f6fbe4deef04a8b382132b783ed23cb1e786e61c11bd51b2f658e90c4aba9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rs87l" Sep 4 00:08:32.139271 kubelet[3275]: E0904 00:08:32.139100 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rs87l_kube-system(785d2c30-0b96-43f5-9b76-56ac0617432c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rs87l_kube-system(785d2c30-0b96-43f5-9b76-56ac0617432c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35f6fbe4deef04a8b382132b783ed23cb1e786e61c11bd51b2f658e90c4aba9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rs87l" podUID="785d2c30-0b96-43f5-9b76-56ac0617432c" Sep 4 00:08:32.139814 containerd[2012]: time="2025-09-04T00:08:32.139773819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4csm7,Uid:855604f4-24b1-40c7-86a6-198bf7be5142,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b69270cdf945340bd4dd4a321d59023988ad91c94f1d50153bb18289f13d584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.139946 kubelet[3275]: E0904 00:08:32.139917 3275 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b69270cdf945340bd4dd4a321d59023988ad91c94f1d50153bb18289f13d584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.139990 kubelet[3275]: E0904 00:08:32.139961 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b69270cdf945340bd4dd4a321d59023988ad91c94f1d50153bb18289f13d584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4csm7" Sep 4 00:08:32.139990 kubelet[3275]: E0904 00:08:32.139980 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b69270cdf945340bd4dd4a321d59023988ad91c94f1d50153bb18289f13d584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4csm7" Sep 4 00:08:32.140080 kubelet[3275]: E0904 00:08:32.140016 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4csm7_calico-system(855604f4-24b1-40c7-86a6-198bf7be5142)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4csm7_calico-system(855604f4-24b1-40c7-86a6-198bf7be5142)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b69270cdf945340bd4dd4a321d59023988ad91c94f1d50153bb18289f13d584\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4csm7" podUID="855604f4-24b1-40c7-86a6-198bf7be5142" Sep 4 00:08:32.140147 containerd[2012]: time="2025-09-04T00:08:32.140109538Z" level=error msg="Failed to destroy network for sandbox \"487fb598ffecfb305a58e072ae5f86232ebb6634bf4ed3b04a912dc1adf6ac0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.143678 containerd[2012]: time="2025-09-04T00:08:32.142975163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bfc954f4-lf2wc,Uid:47765c8f-e89b-4d77-9556-3f7f45952dc3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"487fb598ffecfb305a58e072ae5f86232ebb6634bf4ed3b04a912dc1adf6ac0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.143784 kubelet[3275]: E0904 00:08:32.143546 3275 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487fb598ffecfb305a58e072ae5f86232ebb6634bf4ed3b04a912dc1adf6ac0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:08:32.143784 kubelet[3275]: E0904 00:08:32.143585 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487fb598ffecfb305a58e072ae5f86232ebb6634bf4ed3b04a912dc1adf6ac0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-56bfc954f4-lf2wc" Sep 4 00:08:32.143784 kubelet[3275]: E0904 00:08:32.143603 3275 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487fb598ffecfb305a58e072ae5f86232ebb6634bf4ed3b04a912dc1adf6ac0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56bfc954f4-lf2wc" Sep 4 00:08:32.143884 kubelet[3275]: E0904 00:08:32.143636 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56bfc954f4-lf2wc_calico-system(47765c8f-e89b-4d77-9556-3f7f45952dc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56bfc954f4-lf2wc_calico-system(47765c8f-e89b-4d77-9556-3f7f45952dc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"487fb598ffecfb305a58e072ae5f86232ebb6634bf4ed3b04a912dc1adf6ac0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56bfc954f4-lf2wc" podUID="47765c8f-e89b-4d77-9556-3f7f45952dc3" Sep 4 00:08:33.081674 systemd[1]: run-netns-cni\x2d12d75c39\x2d1e06\x2d0eac\x2d064c\x2dcfce1f1422fe.mount: Deactivated successfully. Sep 4 00:08:33.081781 systemd[1]: run-netns-cni\x2d5fdb5a44\x2df991\x2d4aee\x2dd505\x2d33b73056a328.mount: Deactivated successfully. Sep 4 00:08:33.081839 systemd[1]: run-netns-cni\x2d733cf05e\x2d9bae\x2deeb2\x2d27d0\x2d4a10482431ac.mount: Deactivated successfully. Sep 4 00:08:33.081890 systemd[1]: run-netns-cni\x2da2e00648\x2db61f\x2d18dc\x2da05e\x2d7f7132c20155.mount: Deactivated successfully. Sep 4 00:08:38.986817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176515208.mount: Deactivated successfully. 
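Every sandbox add/delete above fails on the same condition: the Calico CNI plugin stats `/var/lib/calico/nodename`, which the calico/node container writes once it has started and mounted `/var/lib/calico/` (the errors clear below once the `calico-node` image finishes pulling and the container starts). A minimal sketch of that check, as a hypothetical helper for diagnosing this state on a node — the function name and the path argument are illustrative; the real check lives inside the Calico CNI plugin:

```shell
#!/bin/sh
# Reproduce the readiness check the Calico CNI plugin performs before
# add/delete: calico/node must have written this host's name to
# /var/lib/calico/nodename (path taken from the errors in the log above).
check_nodename() {
  # $1: path to the nodename file (normally /var/lib/calico/nodename)
  if [ -f "$1" ]; then
    echo "ok: $(cat "$1")"
  else
    echo "missing: check that calico/node is running and has mounted /var/lib/calico/"
    return 1
  fi
}
```

Until `check_nodename /var/lib/calico/nodename` succeeds, every pod sandbox on the node fails exactly as logged, and kubelet keeps retrying with `CreatePodSandboxError`.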
Sep 4 00:08:39.065224 containerd[2012]: time="2025-09-04T00:08:39.065161743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 00:08:39.067311 containerd[2012]: time="2025-09-04T00:08:39.042292940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:39.077271 containerd[2012]: time="2025-09-04T00:08:39.076984654Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:39.082025 containerd[2012]: time="2025-09-04T00:08:39.080824659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.019567629s" Sep 4 00:08:39.082025 containerd[2012]: time="2025-09-04T00:08:39.080882052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 00:08:39.083563 containerd[2012]: time="2025-09-04T00:08:39.083532242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:39.104667 containerd[2012]: time="2025-09-04T00:08:39.104623112Z" level=info msg="CreateContainer within sandbox \"ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 00:08:39.252509 containerd[2012]: time="2025-09-04T00:08:39.252453663Z" level=info msg="Container 
f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:08:39.255573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161030930.mount: Deactivated successfully. Sep 4 00:08:39.307879 containerd[2012]: time="2025-09-04T00:08:39.307831185Z" level=info msg="CreateContainer within sandbox \"ab738cb848d88ee8f5dae8da7c822f8765d67c777dc39347ebcfcb23e013bb22\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05\"" Sep 4 00:08:39.308813 containerd[2012]: time="2025-09-04T00:08:39.308701609Z" level=info msg="StartContainer for \"f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05\"" Sep 4 00:08:39.319884 containerd[2012]: time="2025-09-04T00:08:39.319842547Z" level=info msg="connecting to shim f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05" address="unix:///run/containerd/s/3ac1c8ceebcad29de4cf629c011772ce9c5c938c4443e13a840488248eb734fd" protocol=ttrpc version=3 Sep 4 00:08:39.403428 systemd[1]: Started cri-containerd-f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05.scope - libcontainer container f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05. Sep 4 00:08:39.461376 containerd[2012]: time="2025-09-04T00:08:39.461310992Z" level=info msg="StartContainer for \"f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05\" returns successfully" Sep 4 00:08:39.659620 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 00:08:39.661102 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 00:08:40.107901 kubelet[3275]: I0904 00:08:40.107822 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9w4ng" podStartSLOduration=2.036647875 podStartE2EDuration="22.107802283s" podCreationTimestamp="2025-09-04 00:08:18 +0000 UTC" firstStartedPulling="2025-09-04 00:08:19.012579383 +0000 UTC m=+23.384916869" lastFinishedPulling="2025-09-04 00:08:39.083733793 +0000 UTC m=+43.456071277" observedRunningTime="2025-09-04 00:08:40.106531152 +0000 UTC m=+44.478868646" watchObservedRunningTime="2025-09-04 00:08:40.107802283 +0000 UTC m=+44.480139825" Sep 4 00:08:40.544655 kubelet[3275]: I0904 00:08:40.544605 3275 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47765c8f-e89b-4d77-9556-3f7f45952dc3-whisker-backend-key-pair\") pod \"47765c8f-e89b-4d77-9556-3f7f45952dc3\" (UID: \"47765c8f-e89b-4d77-9556-3f7f45952dc3\") " Sep 4 00:08:40.544655 kubelet[3275]: I0904 00:08:40.544665 3275 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2w6v\" (UniqueName: \"kubernetes.io/projected/47765c8f-e89b-4d77-9556-3f7f45952dc3-kube-api-access-q2w6v\") pod \"47765c8f-e89b-4d77-9556-3f7f45952dc3\" (UID: \"47765c8f-e89b-4d77-9556-3f7f45952dc3\") " Sep 4 00:08:40.544847 kubelet[3275]: I0904 00:08:40.544688 3275 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47765c8f-e89b-4d77-9556-3f7f45952dc3-whisker-ca-bundle\") pod \"47765c8f-e89b-4d77-9556-3f7f45952dc3\" (UID: \"47765c8f-e89b-4d77-9556-3f7f45952dc3\") " Sep 4 00:08:40.548912 kubelet[3275]: I0904 00:08:40.548515 3275 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47765c8f-e89b-4d77-9556-3f7f45952dc3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "47765c8f-e89b-4d77-9556-3f7f45952dc3" 
(UID: "47765c8f-e89b-4d77-9556-3f7f45952dc3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:08:40.552566 kubelet[3275]: I0904 00:08:40.552522 3275 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47765c8f-e89b-4d77-9556-3f7f45952dc3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "47765c8f-e89b-4d77-9556-3f7f45952dc3" (UID: "47765c8f-e89b-4d77-9556-3f7f45952dc3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:08:40.554022 systemd[1]: var-lib-kubelet-pods-47765c8f\x2de89b\x2d4d77\x2d9556\x2d3f7f45952dc3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 00:08:40.557320 kubelet[3275]: I0904 00:08:40.555390 3275 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47765c8f-e89b-4d77-9556-3f7f45952dc3-kube-api-access-q2w6v" (OuterVolumeSpecName: "kube-api-access-q2w6v") pod "47765c8f-e89b-4d77-9556-3f7f45952dc3" (UID: "47765c8f-e89b-4d77-9556-3f7f45952dc3"). InnerVolumeSpecName "kube-api-access-q2w6v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:08:40.558086 systemd[1]: var-lib-kubelet-pods-47765c8f\x2de89b\x2d4d77\x2d9556\x2d3f7f45952dc3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2w6v.mount: Deactivated successfully. 
Sep 4 00:08:40.645945 kubelet[3275]: I0904 00:08:40.645893 3275 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47765c8f-e89b-4d77-9556-3f7f45952dc3-whisker-backend-key-pair\") on node \"ip-172-31-29-190\" DevicePath \"\"" Sep 4 00:08:40.645945 kubelet[3275]: I0904 00:08:40.645932 3275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q2w6v\" (UniqueName: \"kubernetes.io/projected/47765c8f-e89b-4d77-9556-3f7f45952dc3-kube-api-access-q2w6v\") on node \"ip-172-31-29-190\" DevicePath \"\"" Sep 4 00:08:40.645945 kubelet[3275]: I0904 00:08:40.645949 3275 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47765c8f-e89b-4d77-9556-3f7f45952dc3-whisker-ca-bundle\") on node \"ip-172-31-29-190\" DevicePath \"\"" Sep 4 00:08:41.097039 systemd[1]: Removed slice kubepods-besteffort-pod47765c8f_e89b_4d77_9556_3f7f45952dc3.slice - libcontainer container kubepods-besteffort-pod47765c8f_e89b_4d77_9556_3f7f45952dc3.slice. Sep 4 00:08:41.454824 systemd[1]: Created slice kubepods-besteffort-pode15b83d1_c430_4c25_886b_e5f3eade4e66.slice - libcontainer container kubepods-besteffort-pode15b83d1_c430_4c25_886b_e5f3eade4e66.slice. 
Sep 4 00:08:41.551144 kubelet[3275]: I0904 00:08:41.551094 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e15b83d1-c430-4c25-886b-e5f3eade4e66-whisker-ca-bundle\") pod \"whisker-6d54dfb589-htm7w\" (UID: \"e15b83d1-c430-4c25-886b-e5f3eade4e66\") " pod="calico-system/whisker-6d54dfb589-htm7w" Sep 4 00:08:41.551144 kubelet[3275]: I0904 00:08:41.551144 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqsr5\" (UniqueName: \"kubernetes.io/projected/e15b83d1-c430-4c25-886b-e5f3eade4e66-kube-api-access-wqsr5\") pod \"whisker-6d54dfb589-htm7w\" (UID: \"e15b83d1-c430-4c25-886b-e5f3eade4e66\") " pod="calico-system/whisker-6d54dfb589-htm7w" Sep 4 00:08:41.551144 kubelet[3275]: I0904 00:08:41.551167 3275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e15b83d1-c430-4c25-886b-e5f3eade4e66-whisker-backend-key-pair\") pod \"whisker-6d54dfb589-htm7w\" (UID: \"e15b83d1-c430-4c25-886b-e5f3eade4e66\") " pod="calico-system/whisker-6d54dfb589-htm7w" Sep 4 00:08:41.764021 containerd[2012]: time="2025-09-04T00:08:41.763748875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d54dfb589-htm7w,Uid:e15b83d1-c430-4c25-886b-e5f3eade4e66,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:41.786701 kubelet[3275]: I0904 00:08:41.785638 3275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47765c8f-e89b-4d77-9556-3f7f45952dc3" path="/var/lib/kubelet/pods/47765c8f-e89b-4d77-9556-3f7f45952dc3/volumes" Sep 4 00:08:42.827192 systemd-networkd[1879]: caliae5cdee3f83: Link UP Sep 4 00:08:42.828128 systemd-networkd[1879]: caliae5cdee3f83: Gained carrier Sep 4 00:08:42.830429 (udev-worker)[4629]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 00:08:42.848902 containerd[2012]: 2025-09-04 00:08:41.824 [INFO][4584] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:08:42.848902 containerd[2012]: 2025-09-04 00:08:42.368 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0 whisker-6d54dfb589- calico-system e15b83d1-c430-4c25-886b-e5f3eade4e66 892 0 2025-09-04 00:08:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d54dfb589 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-190 whisker-6d54dfb589-htm7w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliae5cdee3f83 [] [] }} ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Namespace="calico-system" Pod="whisker-6d54dfb589-htm7w" WorkloadEndpoint="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-" Sep 4 00:08:42.848902 containerd[2012]: 2025-09-04 00:08:42.368 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Namespace="calico-system" Pod="whisker-6d54dfb589-htm7w" WorkloadEndpoint="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" Sep 4 00:08:42.848902 containerd[2012]: 2025-09-04 00:08:42.733 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" HandleID="k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Workload="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.738 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" 
HandleID="k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Workload="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305400), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-190", "pod":"whisker-6d54dfb589-htm7w", "timestamp":"2025-09-04 00:08:42.733434402 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.738 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.738 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.739 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190' Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.762 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" host="ip-172-31-29-190" Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.773 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190" Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.779 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.781 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:42.849424 containerd[2012]: 2025-09-04 00:08:42.783 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 
00:08:42.851552 containerd[2012]: 2025-09-04 00:08:42.783 [INFO][4599] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" host="ip-172-31-29-190" Sep 4 00:08:42.851552 containerd[2012]: 2025-09-04 00:08:42.785 [INFO][4599] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c Sep 4 00:08:42.851552 containerd[2012]: 2025-09-04 00:08:42.790 [INFO][4599] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" host="ip-172-31-29-190" Sep 4 00:08:42.851552 containerd[2012]: 2025-09-04 00:08:42.798 [INFO][4599] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.193/26] block=192.168.121.192/26 handle="k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" host="ip-172-31-29-190" Sep 4 00:08:42.851552 containerd[2012]: 2025-09-04 00:08:42.798 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.193/26] handle="k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" host="ip-172-31-29-190" Sep 4 00:08:42.851552 containerd[2012]: 2025-09-04 00:08:42.799 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:08:42.851552 containerd[2012]: 2025-09-04 00:08:42.799 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.193/26] IPv6=[] ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" HandleID="k8s-pod-network.d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Workload="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" Sep 4 00:08:42.851715 containerd[2012]: 2025-09-04 00:08:42.801 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Namespace="calico-system" Pod="whisker-6d54dfb589-htm7w" WorkloadEndpoint="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0", GenerateName:"whisker-6d54dfb589-", Namespace:"calico-system", SelfLink:"", UID:"e15b83d1-c430-4c25-886b-e5f3eade4e66", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d54dfb589", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"whisker-6d54dfb589-htm7w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"caliae5cdee3f83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:42.851715 containerd[2012]: 2025-09-04 00:08:42.802 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.193/32] ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Namespace="calico-system" Pod="whisker-6d54dfb589-htm7w" WorkloadEndpoint="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" Sep 4 00:08:42.851802 containerd[2012]: 2025-09-04 00:08:42.802 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae5cdee3f83 ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Namespace="calico-system" Pod="whisker-6d54dfb589-htm7w" WorkloadEndpoint="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" Sep 4 00:08:42.851802 containerd[2012]: 2025-09-04 00:08:42.829 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Namespace="calico-system" Pod="whisker-6d54dfb589-htm7w" WorkloadEndpoint="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" Sep 4 00:08:42.851854 containerd[2012]: 2025-09-04 00:08:42.830 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Namespace="calico-system" Pod="whisker-6d54dfb589-htm7w" WorkloadEndpoint="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0", GenerateName:"whisker-6d54dfb589-", Namespace:"calico-system", SelfLink:"", UID:"e15b83d1-c430-4c25-886b-e5f3eade4e66", ResourceVersion:"892", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d54dfb589", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c", Pod:"whisker-6d54dfb589-htm7w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae5cdee3f83", MAC:"ea:7f:17:d4:ae:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:42.851911 containerd[2012]: 2025-09-04 00:08:42.843 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" Namespace="calico-system" Pod="whisker-6d54dfb589-htm7w" WorkloadEndpoint="ip--172--31--29--190-k8s-whisker--6d54dfb589--htm7w-eth0" Sep 4 00:08:42.987634 kubelet[3275]: I0904 00:08:42.987595 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:08:43.210629 containerd[2012]: time="2025-09-04T00:08:43.210382833Z" level=info msg="connecting to shim d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c" address="unix:///run/containerd/s/890d3ec27061bf1a509a242f88ee2a53d81b9ba62d6a967696b7c1ed273e9a75" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:43.252693 systemd[1]: Started 
cri-containerd-d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c.scope - libcontainer container d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c. Sep 4 00:08:43.326368 containerd[2012]: time="2025-09-04T00:08:43.326273235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05\" id:\"24e0a32fc27f2da2129725780db6c45a84c9bdf7efcb5aa53486624b3b822d08\" pid:4651 exit_status:1 exited_at:{seconds:1756944523 nanos:325902005}" Sep 4 00:08:43.370108 containerd[2012]: time="2025-09-04T00:08:43.370025576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d54dfb589-htm7w,Uid:e15b83d1-c430-4c25-886b-e5f3eade4e66,Namespace:calico-system,Attempt:0,} returns sandbox id \"d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c\"" Sep 4 00:08:43.373856 containerd[2012]: time="2025-09-04T00:08:43.373788875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 00:08:43.474310 containerd[2012]: time="2025-09-04T00:08:43.473572445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05\" id:\"083a29ce2dfa99555869387aadc094614b07be10917801241c8145574fe899b8\" pid:4720 exit_status:1 exited_at:{seconds:1756944523 nanos:469974417}" Sep 4 00:08:43.783694 containerd[2012]: time="2025-09-04T00:08:43.782798968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655448545f-s7m6x,Uid:23f0ea80-398b-4f23-8e24-f4c05cda5b7a,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:43.795192 containerd[2012]: time="2025-09-04T00:08:43.795127235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664cb9f68d-t5fls,Uid:85b1ac46-1183-4716-965a-da420e906863,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:08:43.797792 containerd[2012]: time="2025-09-04T00:08:43.796391636Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-4csm7,Uid:855604f4-24b1-40c7-86a6-198bf7be5142,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:43.797792 containerd[2012]: time="2025-09-04T00:08:43.796567731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d477df-hxgst,Uid:5f012bd7-0ff2-46a3-ac4f-80680b0884e0,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:08:44.073934 systemd-networkd[1879]: cali741ece4d514: Link UP Sep 4 00:08:44.074864 systemd-networkd[1879]: cali741ece4d514: Gained carrier Sep 4 00:08:44.101202 containerd[2012]: 2025-09-04 00:08:43.829 [INFO][4750] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:08:44.101202 containerd[2012]: 2025-09-04 00:08:43.878 [INFO][4750] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0 calico-kube-controllers-655448545f- calico-system 23f0ea80-398b-4f23-8e24-f4c05cda5b7a 819 0 2025-09-04 00:08:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:655448545f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-190 calico-kube-controllers-655448545f-s7m6x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali741ece4d514 [] [] }} ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Namespace="calico-system" Pod="calico-kube-controllers-655448545f-s7m6x" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-" Sep 4 00:08:44.101202 containerd[2012]: 2025-09-04 00:08:43.878 [INFO][4750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Namespace="calico-system" 
Pod="calico-kube-controllers-655448545f-s7m6x" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" Sep 4 00:08:44.101202 containerd[2012]: 2025-09-04 00:08:44.000 [INFO][4790] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" HandleID="k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Workload="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.001 [INFO][4790] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" HandleID="k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Workload="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-190", "pod":"calico-kube-controllers-655448545f-s7m6x", "timestamp":"2025-09-04 00:08:44.000970997 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.001 [INFO][4790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.001 [INFO][4790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.001 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190' Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.016 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" host="ip-172-31-29-190" Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.025 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190" Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.037 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.041 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.101894 containerd[2012]: 2025-09-04 00:08:44.044 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.104780 containerd[2012]: 2025-09-04 00:08:44.044 [INFO][4790] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" host="ip-172-31-29-190" Sep 4 00:08:44.104780 containerd[2012]: 2025-09-04 00:08:44.047 [INFO][4790] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b Sep 4 00:08:44.104780 containerd[2012]: 2025-09-04 00:08:44.052 [INFO][4790] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" host="ip-172-31-29-190" Sep 4 00:08:44.104780 containerd[2012]: 2025-09-04 00:08:44.062 [INFO][4790] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.194/26] block=192.168.121.192/26 
handle="k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" host="ip-172-31-29-190" Sep 4 00:08:44.104780 containerd[2012]: 2025-09-04 00:08:44.062 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.194/26] handle="k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" host="ip-172-31-29-190" Sep 4 00:08:44.104780 containerd[2012]: 2025-09-04 00:08:44.062 [INFO][4790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:08:44.104780 containerd[2012]: 2025-09-04 00:08:44.062 [INFO][4790] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.194/26] IPv6=[] ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" HandleID="k8s-pod-network.8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Workload="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" Sep 4 00:08:44.105088 containerd[2012]: 2025-09-04 00:08:44.069 [INFO][4750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Namespace="calico-system" Pod="calico-kube-controllers-655448545f-s7m6x" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0", GenerateName:"calico-kube-controllers-655448545f-", Namespace:"calico-system", SelfLink:"", UID:"23f0ea80-398b-4f23-8e24-f4c05cda5b7a", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655448545f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"calico-kube-controllers-655448545f-s7m6x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali741ece4d514", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:44.105230 containerd[2012]: 2025-09-04 00:08:44.069 [INFO][4750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.194/32] ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Namespace="calico-system" Pod="calico-kube-controllers-655448545f-s7m6x" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" Sep 4 00:08:44.105230 containerd[2012]: 2025-09-04 00:08:44.069 [INFO][4750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali741ece4d514 ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Namespace="calico-system" Pod="calico-kube-controllers-655448545f-s7m6x" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" Sep 4 00:08:44.105230 containerd[2012]: 2025-09-04 00:08:44.075 [INFO][4750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Namespace="calico-system" Pod="calico-kube-controllers-655448545f-s7m6x" 
WorkloadEndpoint="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" Sep 4 00:08:44.106063 containerd[2012]: 2025-09-04 00:08:44.076 [INFO][4750] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Namespace="calico-system" Pod="calico-kube-controllers-655448545f-s7m6x" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0", GenerateName:"calico-kube-controllers-655448545f-", Namespace:"calico-system", SelfLink:"", UID:"23f0ea80-398b-4f23-8e24-f4c05cda5b7a", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655448545f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b", Pod:"calico-kube-controllers-655448545f-s7m6x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali741ece4d514", 
MAC:"1e:41:a0:2d:99:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:44.106168 containerd[2012]: 2025-09-04 00:08:44.095 [INFO][4750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" Namespace="calico-system" Pod="calico-kube-controllers-655448545f-s7m6x" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--kube--controllers--655448545f--s7m6x-eth0" Sep 4 00:08:44.141057 containerd[2012]: time="2025-09-04T00:08:44.141015672Z" level=info msg="connecting to shim 8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b" address="unix:///run/containerd/s/da1321e82c6d31bbce4d7a8d689ec3ce5777dc3d1c2d2c1effaec0bc36da95a5" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:44.174813 systemd-networkd[1879]: calie1ef844bac6: Link UP Sep 4 00:08:44.174990 systemd-networkd[1879]: calie1ef844bac6: Gained carrier Sep 4 00:08:44.195449 systemd[1]: Started cri-containerd-8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b.scope - libcontainer container 8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b. 
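Editor's note: the `ipam/ipam.go` sequence repeated above (acquire host-wide IPAM lock → confirm the node's affinity for block `192.168.121.192/26` → claim the next free address → release the lock) explains why the pods on this node receive `.193`, `.194`, `.195` in order. The toy model below is an illustrative sketch of that block-affinity allocation pattern, not Calico's actual implementation; the class and method names are invented for the example.

```python
import ipaddress
import threading

class BlockIPAM:
    """Toy model of the block-affinity IPAM flow seen in the log above.

    A node holds an affinity for one /26 block; under a host-wide lock it
    claims the lowest free host address and records it against a handle
    (one handle per container). Names here are illustrative only.
    """

    def __init__(self, block_cidr):
        self.block = ipaddress.ip_network(block_cidr)
        self.lock = threading.Lock()   # the "host-wide IPAM lock" in the log
        self.allocated = {}            # handle -> assigned IP

    def auto_assign(self, handle):
        with self.lock:  # "About to acquire" / "Acquired host-wide IPAM lock"
            # hosts() skips the network address (.192) and broadcast (.255),
            # matching the log where the first claim in the block is .193.
            for ip in self.block.hosts():
                if ip not in self.allocated.values():
                    self.allocated[handle] = ip  # "Writing block ... to claim IPs"
                    return str(ip)
            raise RuntimeError("block exhausted")

ipam = BlockIPAM("192.168.121.192/26")
print(ipam.auto_assign("whisker"))           # 192.168.121.193
print(ipam.auto_assign("kube-controllers"))  # 192.168.121.194
```

Successive calls yield consecutive addresses from the block, which is exactly the progression visible across the three sandbox setups in this log.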
Sep 4 00:08:44.196652 containerd[2012]: 2025-09-04 00:08:43.911 [INFO][4763] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:08:44.196652 containerd[2012]: 2025-09-04 00:08:43.928 [INFO][4763] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0 calico-apiserver-664cb9f68d- calico-apiserver 85b1ac46-1183-4716-965a-da420e906863 820 0 2025-09-04 00:08:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:664cb9f68d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-190 calico-apiserver-664cb9f68d-t5fls eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie1ef844bac6 [] [] }} ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-t5fls" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-" Sep 4 00:08:44.196652 containerd[2012]: 2025-09-04 00:08:43.928 [INFO][4763] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-t5fls" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:08:44.196652 containerd[2012]: 2025-09-04 00:08:44.023 [INFO][4798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.024 
[INFO][4798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-190", "pod":"calico-apiserver-664cb9f68d-t5fls", "timestamp":"2025-09-04 00:08:44.023804018 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.024 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.063 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.063 [INFO][4798] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190' Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.115 [INFO][4798] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" host="ip-172-31-29-190" Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.128 [INFO][4798] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190" Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.135 [INFO][4798] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.138 [INFO][4798] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.196858 containerd[2012]: 2025-09-04 00:08:44.142 [INFO][4798] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.197149 containerd[2012]: 2025-09-04 00:08:44.142 [INFO][4798] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" host="ip-172-31-29-190" Sep 4 00:08:44.197149 containerd[2012]: 2025-09-04 00:08:44.145 [INFO][4798] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421 Sep 4 00:08:44.197149 containerd[2012]: 2025-09-04 00:08:44.155 [INFO][4798] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" host="ip-172-31-29-190" Sep 4 00:08:44.197149 containerd[2012]: 2025-09-04 00:08:44.164 [INFO][4798] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.195/26] block=192.168.121.192/26 
handle="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" host="ip-172-31-29-190" Sep 4 00:08:44.197149 containerd[2012]: 2025-09-04 00:08:44.164 [INFO][4798] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.195/26] handle="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" host="ip-172-31-29-190" Sep 4 00:08:44.197149 containerd[2012]: 2025-09-04 00:08:44.164 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:08:44.197149 containerd[2012]: 2025-09-04 00:08:44.164 [INFO][4798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.195/26] IPv6=[] ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:08:44.197861 containerd[2012]: 2025-09-04 00:08:44.169 [INFO][4763] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-t5fls" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0", GenerateName:"calico-apiserver-664cb9f68d-", Namespace:"calico-apiserver", SelfLink:"", UID:"85b1ac46-1183-4716-965a-da420e906863", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"664cb9f68d", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"calico-apiserver-664cb9f68d-t5fls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1ef844bac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:44.197931 containerd[2012]: 2025-09-04 00:08:44.171 [INFO][4763] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.195/32] ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-t5fls" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:08:44.197931 containerd[2012]: 2025-09-04 00:08:44.171 [INFO][4763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1ef844bac6 ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-t5fls" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:08:44.197931 containerd[2012]: 2025-09-04 00:08:44.176 [INFO][4763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-t5fls" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 
00:08:44.198665 containerd[2012]: 2025-09-04 00:08:44.176 [INFO][4763] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-t5fls" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0", GenerateName:"calico-apiserver-664cb9f68d-", Namespace:"calico-apiserver", SelfLink:"", UID:"85b1ac46-1183-4716-965a-da420e906863", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"664cb9f68d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421", Pod:"calico-apiserver-664cb9f68d-t5fls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1ef844bac6", MAC:"7e:c7:b6:91:7c:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 
00:08:44.198785 containerd[2012]: 2025-09-04 00:08:44.191 [INFO][4763] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-t5fls" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:08:44.261594 containerd[2012]: time="2025-09-04T00:08:44.261542889Z" level=info msg="connecting to shim c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" address="unix:///run/containerd/s/0e9d2d07bf2e0ae223fb55c87c2403ad5984b94b97f1258ec7772079db8622e6" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:44.320352 containerd[2012]: time="2025-09-04T00:08:44.319675494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655448545f-s7m6x,Uid:23f0ea80-398b-4f23-8e24-f4c05cda5b7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b\"" Sep 4 00:08:44.327421 systemd-networkd[1879]: cali0ef2561a35a: Link UP Sep 4 00:08:44.328530 systemd-networkd[1879]: cali0ef2561a35a: Gained carrier Sep 4 00:08:44.366155 kubelet[3275]: I0904 00:08:44.366118 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:08:44.381484 systemd[1]: Started cri-containerd-c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421.scope - libcontainer container c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421. 
Sep 4 00:08:44.408294 containerd[2012]: 2025-09-04 00:08:43.971 [INFO][4761] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:08:44.408294 containerd[2012]: 2025-09-04 00:08:43.986 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0 csi-node-driver- calico-system 855604f4-24b1-40c7-86a6-198bf7be5142 686 0 2025-09-04 00:08:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-190 csi-node-driver-4csm7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0ef2561a35a [] [] }} ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Namespace="calico-system" Pod="csi-node-driver-4csm7" WorkloadEndpoint="ip--172--31--29--190-k8s-csi--node--driver--4csm7-" Sep 4 00:08:44.408294 containerd[2012]: 2025-09-04 00:08:43.986 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Namespace="calico-system" Pod="csi-node-driver-4csm7" WorkloadEndpoint="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" Sep 4 00:08:44.408294 containerd[2012]: 2025-09-04 00:08:44.049 [INFO][4811] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" HandleID="k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Workload="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.049 [INFO][4811] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" HandleID="k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Workload="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-190", "pod":"csi-node-driver-4csm7", "timestamp":"2025-09-04 00:08:44.049432884 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.050 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.164 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.165 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190' Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.216 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" host="ip-172-31-29-190" Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.225 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190" Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.247 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.252 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.408643 containerd[2012]: 2025-09-04 00:08:44.257 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and 
block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.410135 containerd[2012]: 2025-09-04 00:08:44.258 [INFO][4811] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" host="ip-172-31-29-190" Sep 4 00:08:44.410135 containerd[2012]: 2025-09-04 00:08:44.263 [INFO][4811] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d Sep 4 00:08:44.410135 containerd[2012]: 2025-09-04 00:08:44.277 [INFO][4811] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" host="ip-172-31-29-190" Sep 4 00:08:44.410135 containerd[2012]: 2025-09-04 00:08:44.299 [INFO][4811] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.196/26] block=192.168.121.192/26 handle="k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" host="ip-172-31-29-190" Sep 4 00:08:44.410135 containerd[2012]: 2025-09-04 00:08:44.299 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.196/26] handle="k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" host="ip-172-31-29-190" Sep 4 00:08:44.410135 containerd[2012]: 2025-09-04 00:08:44.300 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:08:44.410135 containerd[2012]: 2025-09-04 00:08:44.300 [INFO][4811] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.196/26] IPv6=[] ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" HandleID="k8s-pod-network.07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Workload="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" Sep 4 00:08:44.411483 containerd[2012]: 2025-09-04 00:08:44.321 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Namespace="calico-system" Pod="csi-node-driver-4csm7" WorkloadEndpoint="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"855604f4-24b1-40c7-86a6-198bf7be5142", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"csi-node-driver-4csm7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ef2561a35a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:44.411588 containerd[2012]: 2025-09-04 00:08:44.322 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.196/32] ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Namespace="calico-system" Pod="csi-node-driver-4csm7" WorkloadEndpoint="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" Sep 4 00:08:44.411588 containerd[2012]: 2025-09-04 00:08:44.322 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ef2561a35a ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Namespace="calico-system" Pod="csi-node-driver-4csm7" WorkloadEndpoint="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" Sep 4 00:08:44.411588 containerd[2012]: 2025-09-04 00:08:44.324 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Namespace="calico-system" Pod="csi-node-driver-4csm7" WorkloadEndpoint="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" Sep 4 00:08:44.411699 containerd[2012]: 2025-09-04 00:08:44.327 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Namespace="calico-system" Pod="csi-node-driver-4csm7" WorkloadEndpoint="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"855604f4-24b1-40c7-86a6-198bf7be5142", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d", Pod:"csi-node-driver-4csm7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ef2561a35a", MAC:"f2:e7:63:8f:47:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:44.411783 containerd[2012]: 2025-09-04 00:08:44.380 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" Namespace="calico-system" Pod="csi-node-driver-4csm7" WorkloadEndpoint="ip--172--31--29--190-k8s-csi--node--driver--4csm7-eth0" Sep 4 00:08:44.473011 containerd[2012]: time="2025-09-04T00:08:44.472536425Z" level=info msg="connecting to shim 07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d" address="unix:///run/containerd/s/58a6a15d1035bb50c7708a456e618a7cf6de8345c9c48db4226cc391260e05df" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:44.490578 
systemd-networkd[1879]: cali8bd8d344aea: Link UP Sep 4 00:08:44.493614 systemd-networkd[1879]: cali8bd8d344aea: Gained carrier Sep 4 00:08:44.539363 systemd[1]: Started cri-containerd-07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d.scope - libcontainer container 07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d. Sep 4 00:08:44.555054 containerd[2012]: 2025-09-04 00:08:43.962 [INFO][4767] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:08:44.555054 containerd[2012]: 2025-09-04 00:08:43.984 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0 calico-apiserver-6d5d477df- calico-apiserver 5f012bd7-0ff2-46a3-ac4f-80680b0884e0 817 0 2025-09-04 00:08:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5d477df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-190 calico-apiserver-6d5d477df-hxgst eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8bd8d344aea [] [] }} ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d477df-hxgst" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-" Sep 4 00:08:44.555054 containerd[2012]: 2025-09-04 00:08:43.984 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d477df-hxgst" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" Sep 4 00:08:44.555054 containerd[2012]: 2025-09-04 00:08:44.050 [INFO][4808] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" HandleID="k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Workload="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.050 [INFO][4808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" HandleID="k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Workload="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-190", "pod":"calico-apiserver-6d5d477df-hxgst", "timestamp":"2025-09-04 00:08:44.050412215 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.050 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.300 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.300 [INFO][4808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190' Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.318 [INFO][4808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" host="ip-172-31-29-190" Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.347 [INFO][4808] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190" Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.361 [INFO][4808] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.365 [INFO][4808] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.556174 containerd[2012]: 2025-09-04 00:08:44.386 [INFO][4808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:44.558192 containerd[2012]: 2025-09-04 00:08:44.387 [INFO][4808] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" host="ip-172-31-29-190" Sep 4 00:08:44.558192 containerd[2012]: 2025-09-04 00:08:44.394 [INFO][4808] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df Sep 4 00:08:44.558192 containerd[2012]: 2025-09-04 00:08:44.430 [INFO][4808] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" host="ip-172-31-29-190" Sep 4 00:08:44.558192 containerd[2012]: 2025-09-04 00:08:44.456 [INFO][4808] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.197/26] block=192.168.121.192/26 
handle="k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" host="ip-172-31-29-190" Sep 4 00:08:44.558192 containerd[2012]: 2025-09-04 00:08:44.456 [INFO][4808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.197/26] handle="k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" host="ip-172-31-29-190" Sep 4 00:08:44.558192 containerd[2012]: 2025-09-04 00:08:44.456 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:08:44.558192 containerd[2012]: 2025-09-04 00:08:44.456 [INFO][4808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.197/26] IPv6=[] ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" HandleID="k8s-pod-network.acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Workload="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" Sep 4 00:08:44.558490 containerd[2012]: 2025-09-04 00:08:44.467 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d477df-hxgst" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0", GenerateName:"calico-apiserver-6d5d477df-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f012bd7-0ff2-46a3-ac4f-80680b0884e0", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5d477df", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"calico-apiserver-6d5d477df-hxgst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bd8d344aea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:44.558589 containerd[2012]: 2025-09-04 00:08:44.468 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.197/32] ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d477df-hxgst" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" Sep 4 00:08:44.558589 containerd[2012]: 2025-09-04 00:08:44.472 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bd8d344aea ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d477df-hxgst" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" Sep 4 00:08:44.558589 containerd[2012]: 2025-09-04 00:08:44.494 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d477df-hxgst" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" Sep 4 00:08:44.558703 
containerd[2012]: 2025-09-04 00:08:44.496 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d477df-hxgst" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0", GenerateName:"calico-apiserver-6d5d477df-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f012bd7-0ff2-46a3-ac4f-80680b0884e0", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5d477df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df", Pod:"calico-apiserver-6d5d477df-hxgst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bd8d344aea", MAC:"fa:82:13:86:b2:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:44.558785 
containerd[2012]: 2025-09-04 00:08:44.549 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d477df-hxgst" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--6d5d477df--hxgst-eth0" Sep 4 00:08:44.651480 systemd-networkd[1879]: caliae5cdee3f83: Gained IPv6LL Sep 4 00:08:44.731145 containerd[2012]: time="2025-09-04T00:08:44.731097620Z" level=info msg="connecting to shim acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df" address="unix:///run/containerd/s/39e56294e0cf25eaeebf9f8d26bc8755890e5e2baa938ee25eb254e9ae8f93b1" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:44.734725 containerd[2012]: time="2025-09-04T00:08:44.734682774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664cb9f68d-t5fls,Uid:85b1ac46-1183-4716-965a-da420e906863,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\"" Sep 4 00:08:44.811799 systemd[1]: Started cri-containerd-acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df.scope - libcontainer container acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df. 
Sep 4 00:08:44.819143 containerd[2012]: time="2025-09-04T00:08:44.819102828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4csm7,Uid:855604f4-24b1-40c7-86a6-198bf7be5142,Namespace:calico-system,Attempt:0,} returns sandbox id \"07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d\"" Sep 4 00:08:45.185537 containerd[2012]: time="2025-09-04T00:08:45.185357305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d477df-hxgst,Uid:5f012bd7-0ff2-46a3-ac4f-80680b0884e0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df\"" Sep 4 00:08:45.232492 containerd[2012]: time="2025-09-04T00:08:45.231360758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:45.236051 containerd[2012]: time="2025-09-04T00:08:45.235995137Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:45.236881 containerd[2012]: time="2025-09-04T00:08:45.236851376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 00:08:45.241571 containerd[2012]: time="2025-09-04T00:08:45.241526385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:08:45.243257 containerd[2012]: time="2025-09-04T00:08:45.242850215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.86901464s" Sep 4 00:08:45.243257 containerd[2012]: time="2025-09-04T00:08:45.242895514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 00:08:45.265762 containerd[2012]: time="2025-09-04T00:08:45.265705868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 00:08:45.306391 containerd[2012]: time="2025-09-04T00:08:45.306197965Z" level=info msg="CreateContainer within sandbox \"d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 00:08:45.321527 containerd[2012]: time="2025-09-04T00:08:45.319402905Z" level=info msg="Container 662a16aac06d6dfe5522f8f3e616fb36588e63e555ac06b8a5c822fc443b3501: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:08:45.341189 containerd[2012]: time="2025-09-04T00:08:45.341148895Z" level=info msg="CreateContainer within sandbox \"d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"662a16aac06d6dfe5522f8f3e616fb36588e63e555ac06b8a5c822fc443b3501\"" Sep 4 00:08:45.342826 containerd[2012]: time="2025-09-04T00:08:45.342786504Z" level=info msg="StartContainer for \"662a16aac06d6dfe5522f8f3e616fb36588e63e555ac06b8a5c822fc443b3501\"" Sep 4 00:08:45.346509 containerd[2012]: time="2025-09-04T00:08:45.346433144Z" level=info msg="connecting to shim 662a16aac06d6dfe5522f8f3e616fb36588e63e555ac06b8a5c822fc443b3501" address="unix:///run/containerd/s/890d3ec27061bf1a509a242f88ee2a53d81b9ba62d6a967696b7c1ed273e9a75" protocol=ttrpc version=3 Sep 4 00:08:45.408685 systemd[1]: Started cri-containerd-662a16aac06d6dfe5522f8f3e616fb36588e63e555ac06b8a5c822fc443b3501.scope - libcontainer container 
662a16aac06d6dfe5522f8f3e616fb36588e63e555ac06b8a5c822fc443b3501. Sep 4 00:08:45.483440 systemd-networkd[1879]: cali741ece4d514: Gained IPv6LL Sep 4 00:08:45.644542 containerd[2012]: time="2025-09-04T00:08:45.644492468Z" level=info msg="StartContainer for \"662a16aac06d6dfe5522f8f3e616fb36588e63e555ac06b8a5c822fc443b3501\" returns successfully" Sep 4 00:08:45.739450 systemd-networkd[1879]: cali8bd8d344aea: Gained IPv6LL Sep 4 00:08:45.763659 systemd[1]: Started sshd@7-172.31.29.190:22-139.178.68.195:59532.service - OpenSSH per-connection server daemon (139.178.68.195:59532). Sep 4 00:08:45.775850 containerd[2012]: time="2025-09-04T00:08:45.775793934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664cb9f68d-sjbqq,Uid:e303bdd1-9653-4113-9d2e-efb6f383697a,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:08:45.776685 containerd[2012]: time="2025-09-04T00:08:45.776543714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-4dg2h,Uid:aa9754da-3cb3-4d5b-886e-a8706c00d845,Namespace:calico-system,Attempt:0,}" Sep 4 00:08:46.081706 sshd[5118]: Accepted publickey for core from 139.178.68.195 port 59532 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:08:46.090304 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:46.110224 systemd-logind[1979]: New session 8 of user core. Sep 4 00:08:46.111853 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 00:08:46.187391 systemd-networkd[1879]: calie1ef844bac6: Gained IPv6LL Sep 4 00:08:46.187644 systemd-networkd[1879]: cali0ef2561a35a: Gained IPv6LL Sep 4 00:08:46.203852 systemd-networkd[1879]: cali073d8fdf5a5: Link UP Sep 4 00:08:46.208815 systemd-networkd[1879]: cali073d8fdf5a5: Gained carrier Sep 4 00:08:46.284261 containerd[2012]: 2025-09-04 00:08:45.953 [INFO][5121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0 goldmane-54d579b49d- calico-system aa9754da-3cb3-4d5b-886e-a8706c00d845 818 0 2025-09-04 00:08:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-190 goldmane-54d579b49d-4dg2h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali073d8fdf5a5 [] [] }} ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Namespace="calico-system" Pod="goldmane-54d579b49d-4dg2h" WorkloadEndpoint="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-" Sep 4 00:08:46.284261 containerd[2012]: 2025-09-04 00:08:45.953 [INFO][5121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Namespace="calico-system" Pod="goldmane-54d579b49d-4dg2h" WorkloadEndpoint="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" Sep 4 00:08:46.284261 containerd[2012]: 2025-09-04 00:08:46.070 [INFO][5150] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" HandleID="k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Workload="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" Sep 4 00:08:46.284771 containerd[2012]: 
2025-09-04 00:08:46.071 [INFO][5150] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" HandleID="k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Workload="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-190", "pod":"goldmane-54d579b49d-4dg2h", "timestamp":"2025-09-04 00:08:46.070464648 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:08:46.284771 containerd[2012]: 2025-09-04 00:08:46.072 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:46.284771 containerd[2012]: 2025-09-04 00:08:46.072 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:08:46.284771 containerd[2012]: 2025-09-04 00:08:46.072 [INFO][5150] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190' Sep 4 00:08:46.284771 containerd[2012]: 2025-09-04 00:08:46.100 [INFO][5150] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" host="ip-172-31-29-190" Sep 4 00:08:46.284771 containerd[2012]: 2025-09-04 00:08:46.122 [INFO][5150] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190" Sep 4 00:08:46.284771 containerd[2012]: 2025-09-04 00:08:46.135 [INFO][5150] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:46.284771 containerd[2012]: 2025-09-04 00:08:46.144 [INFO][5150] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:46.284771 containerd[2012]: 2025-09-04 00:08:46.151 [INFO][5150] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:46.285040 containerd[2012]: 2025-09-04 00:08:46.151 [INFO][5150] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" host="ip-172-31-29-190" Sep 4 00:08:46.285040 containerd[2012]: 2025-09-04 00:08:46.154 [INFO][5150] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b Sep 4 00:08:46.285040 containerd[2012]: 2025-09-04 00:08:46.161 [INFO][5150] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" host="ip-172-31-29-190" Sep 4 00:08:46.285040 containerd[2012]: 2025-09-04 00:08:46.172 [INFO][5150] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.198/26] block=192.168.121.192/26 
handle="k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" host="ip-172-31-29-190" Sep 4 00:08:46.285040 containerd[2012]: 2025-09-04 00:08:46.172 [INFO][5150] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.198/26] handle="k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" host="ip-172-31-29-190" Sep 4 00:08:46.285040 containerd[2012]: 2025-09-04 00:08:46.172 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:08:46.285040 containerd[2012]: 2025-09-04 00:08:46.172 [INFO][5150] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.198/26] IPv6=[] ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" HandleID="k8s-pod-network.b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Workload="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" Sep 4 00:08:46.285693 containerd[2012]: 2025-09-04 00:08:46.195 [INFO][5121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Namespace="calico-system" Pod="goldmane-54d579b49d-4dg2h" WorkloadEndpoint="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"aa9754da-3cb3-4d5b-886e-a8706c00d845", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"goldmane-54d579b49d-4dg2h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali073d8fdf5a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:46.285693 containerd[2012]: 2025-09-04 00:08:46.195 [INFO][5121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.198/32] ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Namespace="calico-system" Pod="goldmane-54d579b49d-4dg2h" WorkloadEndpoint="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" Sep 4 00:08:46.285916 containerd[2012]: 2025-09-04 00:08:46.196 [INFO][5121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali073d8fdf5a5 ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Namespace="calico-system" Pod="goldmane-54d579b49d-4dg2h" WorkloadEndpoint="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" Sep 4 00:08:46.285916 containerd[2012]: 2025-09-04 00:08:46.204 [INFO][5121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Namespace="calico-system" Pod="goldmane-54d579b49d-4dg2h" WorkloadEndpoint="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" Sep 4 00:08:46.287348 containerd[2012]: 2025-09-04 00:08:46.205 [INFO][5121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Namespace="calico-system" Pod="goldmane-54d579b49d-4dg2h" WorkloadEndpoint="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"aa9754da-3cb3-4d5b-886e-a8706c00d845", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b", Pod:"goldmane-54d579b49d-4dg2h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali073d8fdf5a5", MAC:"ae:2d:d5:98:a9:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:46.287490 containerd[2012]: 2025-09-04 00:08:46.267 [INFO][5121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" Namespace="calico-system" Pod="goldmane-54d579b49d-4dg2h" 
WorkloadEndpoint="ip--172--31--29--190-k8s-goldmane--54d579b49d--4dg2h-eth0" Sep 4 00:08:46.360756 systemd-networkd[1879]: calie0be5511ae0: Link UP Sep 4 00:08:46.364902 systemd-networkd[1879]: calie0be5511ae0: Gained carrier Sep 4 00:08:46.399301 containerd[2012]: time="2025-09-04T00:08:46.398402494Z" level=info msg="connecting to shim b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b" address="unix:///run/containerd/s/51be4f8b5487cd2104dfc426611e9d26f23ff5925de7c31824a4d104f8892b8b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:46.415263 containerd[2012]: 2025-09-04 00:08:45.962 [INFO][5119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0 calico-apiserver-664cb9f68d- calico-apiserver e303bdd1-9653-4113-9d2e-efb6f383697a 821 0 2025-09-04 00:08:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:664cb9f68d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-190 calico-apiserver-664cb9f68d-sjbqq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie0be5511ae0 [] [] }} ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-sjbqq" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-" Sep 4 00:08:46.415263 containerd[2012]: 2025-09-04 00:08:45.962 [INFO][5119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-sjbqq" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:08:46.415263 containerd[2012]: 2025-09-04 
00:08:46.124 [INFO][5156] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.124 [INFO][5156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003897c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-190", "pod":"calico-apiserver-664cb9f68d-sjbqq", "timestamp":"2025-09-04 00:08:46.120216965 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.124 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.173 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.174 [INFO][5156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190' Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.209 [INFO][5156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" host="ip-172-31-29-190" Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.253 [INFO][5156] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190" Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.281 [INFO][5156] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.289 [INFO][5156] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:46.415601 containerd[2012]: 2025-09-04 00:08:46.293 [INFO][5156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:46.418194 containerd[2012]: 2025-09-04 00:08:46.294 [INFO][5156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" host="ip-172-31-29-190" Sep 4 00:08:46.418194 containerd[2012]: 2025-09-04 00:08:46.297 [INFO][5156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec Sep 4 00:08:46.418194 containerd[2012]: 2025-09-04 00:08:46.308 [INFO][5156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" host="ip-172-31-29-190" Sep 4 00:08:46.418194 containerd[2012]: 2025-09-04 00:08:46.327 [INFO][5156] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.199/26] block=192.168.121.192/26 
handle="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" host="ip-172-31-29-190" Sep 4 00:08:46.418194 containerd[2012]: 2025-09-04 00:08:46.328 [INFO][5156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.199/26] handle="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" host="ip-172-31-29-190" Sep 4 00:08:46.418194 containerd[2012]: 2025-09-04 00:08:46.328 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:08:46.418194 containerd[2012]: 2025-09-04 00:08:46.329 [INFO][5156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.199/26] IPv6=[] ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:08:46.418490 containerd[2012]: 2025-09-04 00:08:46.339 [INFO][5119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-sjbqq" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0", GenerateName:"calico-apiserver-664cb9f68d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e303bdd1-9653-4113-9d2e-efb6f383697a", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"664cb9f68d", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"calico-apiserver-664cb9f68d-sjbqq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0be5511ae0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:08:46.418602 containerd[2012]: 2025-09-04 00:08:46.341 [INFO][5119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.199/32] ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-sjbqq" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:08:46.418602 containerd[2012]: 2025-09-04 00:08:46.341 [INFO][5119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0be5511ae0 ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-sjbqq" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:08:46.418602 containerd[2012]: 2025-09-04 00:08:46.367 [INFO][5119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-sjbqq" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 
00:08:46.418719 containerd[2012]: 2025-09-04 00:08:46.368 [INFO][5119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-sjbqq" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0", GenerateName:"calico-apiserver-664cb9f68d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e303bdd1-9653-4113-9d2e-efb6f383697a", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"664cb9f68d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec", Pod:"calico-apiserver-664cb9f68d-sjbqq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0be5511ae0", MAC:"c6:8c:bb:6a:e3:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 
00:08:46.419594 containerd[2012]: 2025-09-04 00:08:46.401 [INFO][5119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Namespace="calico-apiserver" Pod="calico-apiserver-664cb9f68d-sjbqq" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:08:46.497229 containerd[2012]: time="2025-09-04T00:08:46.497131519Z" level=info msg="connecting to shim 2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" address="unix:///run/containerd/s/af844aa5998dcee1bf61b8379202e07e987d3866c19c50326433500a0e59f1c9" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:08:46.586502 systemd[1]: Started cri-containerd-b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b.scope - libcontainer container b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b. Sep 4 00:08:46.667495 systemd[1]: Started cri-containerd-2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec.scope - libcontainer container 2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec. 
Sep 4 00:08:46.778670 containerd[2012]: time="2025-09-04T00:08:46.778628670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9hw58,Uid:6d71db07-e698-49b3-b244-3fc1673cedef,Namespace:kube-system,Attempt:0,}" Sep 4 00:08:47.191977 containerd[2012]: time="2025-09-04T00:08:47.191934067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-4dg2h,Uid:aa9754da-3cb3-4d5b-886e-a8706c00d845,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b\"" Sep 4 00:08:47.278413 systemd-networkd[1879]: calic1776209a2b: Link UP Sep 4 00:08:47.283577 systemd-networkd[1879]: calic1776209a2b: Gained carrier Sep 4 00:08:47.339412 systemd-networkd[1879]: cali073d8fdf5a5: Gained IPv6LL Sep 4 00:08:47.352830 containerd[2012]: 2025-09-04 00:08:46.985 [INFO][5273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0 coredns-668d6bf9bc- kube-system 6d71db07-e698-49b3-b244-3fc1673cedef 813 0 2025-09-04 00:08:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-190 coredns-668d6bf9bc-9hw58 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic1776209a2b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hw58" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-" Sep 4 00:08:47.352830 containerd[2012]: 2025-09-04 00:08:46.985 [INFO][5273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hw58" 
WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0" Sep 4 00:08:47.352830 containerd[2012]: 2025-09-04 00:08:47.083 [INFO][5286] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" HandleID="k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Workload="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0" Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.085 [INFO][5286] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" HandleID="k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Workload="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003328b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-190", "pod":"coredns-668d6bf9bc-9hw58", "timestamp":"2025-09-04 00:08:47.083531736 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.085 [INFO][5286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.085 [INFO][5286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.086 [INFO][5286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190' Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.123 [INFO][5286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" host="ip-172-31-29-190" Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.142 [INFO][5286] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190" Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.171 [INFO][5286] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.183 [INFO][5286] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:47.354324 containerd[2012]: 2025-09-04 00:08:47.197 [INFO][5286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190" Sep 4 00:08:47.354757 containerd[2012]: 2025-09-04 00:08:47.198 [INFO][5286] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" host="ip-172-31-29-190" Sep 4 00:08:47.354757 containerd[2012]: 2025-09-04 00:08:47.206 [INFO][5286] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc Sep 4 00:08:47.354757 containerd[2012]: 2025-09-04 00:08:47.220 [INFO][5286] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" host="ip-172-31-29-190" Sep 4 00:08:47.354757 containerd[2012]: 2025-09-04 00:08:47.251 [INFO][5286] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.200/26] block=192.168.121.192/26 
handle="k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" host="ip-172-31-29-190"
Sep 4 00:08:47.354757 containerd[2012]: 2025-09-04 00:08:47.252 [INFO][5286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.200/26] handle="k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" host="ip-172-31-29-190"
Sep 4 00:08:47.354757 containerd[2012]: 2025-09-04 00:08:47.253 [INFO][5286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 00:08:47.354757 containerd[2012]: 2025-09-04 00:08:47.253 [INFO][5286] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.200/26] IPv6=[] ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" HandleID="k8s-pod-network.b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Workload="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0"
Sep 4 00:08:47.355063 containerd[2012]: 2025-09-04 00:08:47.269 [INFO][5273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hw58" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6d71db07-e698-49b3-b244-3fc1673cedef", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"coredns-668d6bf9bc-9hw58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1776209a2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:08:47.355063 containerd[2012]: 2025-09-04 00:08:47.270 [INFO][5273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.200/32] ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hw58" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0"
Sep 4 00:08:47.355063 containerd[2012]: 2025-09-04 00:08:47.270 [INFO][5273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1776209a2b ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hw58" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0"
Sep 4 00:08:47.355063 containerd[2012]: 2025-09-04 00:08:47.287 [INFO][5273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hw58" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0"
Sep 4 00:08:47.355063 containerd[2012]: 2025-09-04 00:08:47.296 [INFO][5273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hw58" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6d71db07-e698-49b3-b244-3fc1673cedef", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc", Pod:"coredns-668d6bf9bc-9hw58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1776209a2b", MAC:"fe:27:f8:7f:f4:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:08:47.355063 containerd[2012]: 2025-09-04 00:08:47.344 [INFO][5273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hw58" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--9hw58-eth0"
Sep 4 00:08:47.442857 containerd[2012]: time="2025-09-04T00:08:47.442045835Z" level=info msg="connecting to shim b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc" address="unix:///run/containerd/s/35ccec86ce92172821d0215ef6a91209b8e8baeefb6a563e5a2086f7c4aa11e9" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:08:47.525546 sshd[5164]: Connection closed by 139.178.68.195 port 59532
Sep 4 00:08:47.525439 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:47.528134 systemd[1]: Started cri-containerd-b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc.scope - libcontainer container b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc.
Sep 4 00:08:47.548376 systemd-logind[1979]: Session 8 logged out. Waiting for processes to exit.
Sep 4 00:08:47.549089 systemd[1]: sshd@7-172.31.29.190:22-139.178.68.195:59532.service: Deactivated successfully.
Sep 4 00:08:47.556080 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 00:08:47.565323 systemd-logind[1979]: Removed session 8.
Sep 4 00:08:47.660618 systemd-networkd[1879]: calie0be5511ae0: Gained IPv6LL
Sep 4 00:08:47.715309 containerd[2012]: time="2025-09-04T00:08:47.714970325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9hw58,Uid:6d71db07-e698-49b3-b244-3fc1673cedef,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc\""
Sep 4 00:08:47.726635 containerd[2012]: time="2025-09-04T00:08:47.726226183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664cb9f68d-sjbqq,Uid:e303bdd1-9653-4113-9d2e-efb6f383697a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\""
Sep 4 00:08:47.738590 containerd[2012]: time="2025-09-04T00:08:47.738543574Z" level=info msg="CreateContainer within sandbox \"b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 00:08:47.773342 containerd[2012]: time="2025-09-04T00:08:47.773277151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs87l,Uid:785d2c30-0b96-43f5-9b76-56ac0617432c,Namespace:kube-system,Attempt:0,}"
Sep 4 00:08:47.954685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3438074230.mount: Deactivated successfully.
Sep 4 00:08:47.958426 containerd[2012]: time="2025-09-04T00:08:47.957490018Z" level=info msg="Container f933697e266f45527e0d6a3af3a8547cce57f8dedae7512686a300916341e25f: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:08:47.981652 containerd[2012]: time="2025-09-04T00:08:47.981405556Z" level=info msg="CreateContainer within sandbox \"b6a16047cf3d41f4a2aab9862d5d6b53fc664be5814057d29b6f0a38fe0427fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f933697e266f45527e0d6a3af3a8547cce57f8dedae7512686a300916341e25f\""
Sep 4 00:08:47.985965 containerd[2012]: time="2025-09-04T00:08:47.985916371Z" level=info msg="StartContainer for \"f933697e266f45527e0d6a3af3a8547cce57f8dedae7512686a300916341e25f\""
Sep 4 00:08:47.994269 containerd[2012]: time="2025-09-04T00:08:47.994203703Z" level=info msg="connecting to shim f933697e266f45527e0d6a3af3a8547cce57f8dedae7512686a300916341e25f" address="unix:///run/containerd/s/35ccec86ce92172821d0215ef6a91209b8e8baeefb6a563e5a2086f7c4aa11e9" protocol=ttrpc version=3
Sep 4 00:08:48.042698 systemd[1]: Started cri-containerd-f933697e266f45527e0d6a3af3a8547cce57f8dedae7512686a300916341e25f.scope - libcontainer container f933697e266f45527e0d6a3af3a8547cce57f8dedae7512686a300916341e25f.
Sep 4 00:08:48.188282 containerd[2012]: time="2025-09-04T00:08:48.188135763Z" level=info msg="StartContainer for \"f933697e266f45527e0d6a3af3a8547cce57f8dedae7512686a300916341e25f\" returns successfully"
Sep 4 00:08:48.210202 systemd-networkd[1879]: cali04fd8f20bc7: Link UP
Sep 4 00:08:48.212637 systemd-networkd[1879]: cali04fd8f20bc7: Gained carrier
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.035 [INFO][5364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0 coredns-668d6bf9bc- kube-system 785d2c30-0b96-43f5-9b76-56ac0617432c 808 0 2025-09-04 00:08:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-190 coredns-668d6bf9bc-rs87l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04fd8f20bc7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs87l" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.035 [INFO][5364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs87l" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.106 [INFO][5395] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" HandleID="k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Workload="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.106 [INFO][5395] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" HandleID="k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Workload="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-190", "pod":"coredns-668d6bf9bc-rs87l", "timestamp":"2025-09-04 00:08:48.106818215 +0000 UTC"}, Hostname:"ip-172-31-29-190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.107 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.107 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.107 [INFO][5395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-190'
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.118 [INFO][5395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.131 [INFO][5395] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.140 [INFO][5395] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.144 [INFO][5395] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.152 [INFO][5395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.152 [INFO][5395] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.160 [INFO][5395] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.175 [INFO][5395] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.195 [INFO][5395] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.121.201/26] block=192.168.121.192/26 handle="k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.197 [INFO][5395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.201/26] handle="k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" host="ip-172-31-29-190"
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.197 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 00:08:48.252481 containerd[2012]: 2025-09-04 00:08:48.197 [INFO][5395] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.201/26] IPv6=[] ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" HandleID="k8s-pod-network.c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Workload="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0"
Sep 4 00:08:48.253137 containerd[2012]: 2025-09-04 00:08:48.205 [INFO][5364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs87l" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"785d2c30-0b96-43f5-9b76-56ac0617432c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"", Pod:"coredns-668d6bf9bc-rs87l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04fd8f20bc7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:08:48.253137 containerd[2012]: 2025-09-04 00:08:48.206 [INFO][5364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.201/32] ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs87l" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0"
Sep 4 00:08:48.253137 containerd[2012]: 2025-09-04 00:08:48.206 [INFO][5364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04fd8f20bc7 ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs87l" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0"
Sep 4 00:08:48.253137 containerd[2012]: 2025-09-04 00:08:48.210 [INFO][5364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs87l" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0"
Sep 4 00:08:48.253137 containerd[2012]: 2025-09-04 00:08:48.210 [INFO][5364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs87l" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"785d2c30-0b96-43f5-9b76-56ac0617432c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-190", ContainerID:"c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04", Pod:"coredns-668d6bf9bc-rs87l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04fd8f20bc7", MAC:"6a:b2:28:76:79:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:08:48.253137 containerd[2012]: 2025-09-04 00:08:48.234 [INFO][5364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs87l" WorkloadEndpoint="ip--172--31--29--190-k8s-coredns--668d6bf9bc--rs87l-eth0"
Sep 4 00:08:48.260620 kubelet[3275]: I0904 00:08:48.257591 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9hw58" podStartSLOduration=47.242399787 podStartE2EDuration="47.242399787s" podCreationTimestamp="2025-09-04 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:08:48.241740472 +0000 UTC m=+52.614077966" watchObservedRunningTime="2025-09-04 00:08:48.242399787 +0000 UTC m=+52.614737281"
Sep 4 00:08:48.298073 containerd[2012]: time="2025-09-04T00:08:48.297995311Z" level=info msg="connecting to shim c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04" address="unix:///run/containerd/s/8448741eb610f2a6e6c730a30cafba0dd761efc1d0f02b995aa31ae55a34d527" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:08:48.353801 systemd[1]: Started cri-containerd-c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04.scope - libcontainer container c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04.
Sep 4 00:08:48.520371 containerd[2012]: time="2025-09-04T00:08:48.518850843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs87l,Uid:785d2c30-0b96-43f5-9b76-56ac0617432c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04\""
Sep 4 00:08:48.527253 containerd[2012]: time="2025-09-04T00:08:48.527055083Z" level=info msg="CreateContainer within sandbox \"c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 00:08:48.555895 containerd[2012]: time="2025-09-04T00:08:48.555851224Z" level=info msg="Container 2e912f13a2c6c6e141fe365afe555915ace3f6e92360d9bd6014f1c83f48d46d: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:08:48.574912 containerd[2012]: time="2025-09-04T00:08:48.572223986Z" level=info msg="CreateContainer within sandbox \"c4e8e82d1cbf3873200bbb7dab7e67add5663210745b5439c250b41bd68a4b04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e912f13a2c6c6e141fe365afe555915ace3f6e92360d9bd6014f1c83f48d46d\""
Sep 4 00:08:48.576775 containerd[2012]: time="2025-09-04T00:08:48.576421396Z" level=info msg="StartContainer for \"2e912f13a2c6c6e141fe365afe555915ace3f6e92360d9bd6014f1c83f48d46d\""
Sep 4 00:08:48.583414 containerd[2012]: time="2025-09-04T00:08:48.583307298Z" level=info msg="connecting to shim 2e912f13a2c6c6e141fe365afe555915ace3f6e92360d9bd6014f1c83f48d46d" address="unix:///run/containerd/s/8448741eb610f2a6e6c730a30cafba0dd761efc1d0f02b995aa31ae55a34d527" protocol=ttrpc version=3
Sep 4 00:08:48.622709 systemd-networkd[1879]: vxlan.calico: Link UP
Sep 4 00:08:48.622723 systemd-networkd[1879]: vxlan.calico: Gained carrier
Sep 4 00:08:48.669177 systemd[1]: Started cri-containerd-2e912f13a2c6c6e141fe365afe555915ace3f6e92360d9bd6014f1c83f48d46d.scope - libcontainer container 2e912f13a2c6c6e141fe365afe555915ace3f6e92360d9bd6014f1c83f48d46d.
Sep 4 00:08:48.737074 containerd[2012]: time="2025-09-04T00:08:48.737039641Z" level=info msg="StartContainer for \"2e912f13a2c6c6e141fe365afe555915ace3f6e92360d9bd6014f1c83f48d46d\" returns successfully"
Sep 4 00:08:48.779197 (udev-worker)[4625]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 00:08:49.263481 systemd-networkd[1879]: calic1776209a2b: Gained IPv6LL
Sep 4 00:08:49.333689 kubelet[3275]: I0904 00:08:49.333599 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rs87l" podStartSLOduration=48.333574504 podStartE2EDuration="48.333574504s" podCreationTimestamp="2025-09-04 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:08:49.332417446 +0000 UTC m=+53.704754959" watchObservedRunningTime="2025-09-04 00:08:49.333574504 +0000 UTC m=+53.705912001"
Sep 4 00:08:49.707700 systemd-networkd[1879]: cali04fd8f20bc7: Gained IPv6LL
Sep 4 00:08:50.233283 containerd[2012]: time="2025-09-04T00:08:50.233205815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:50.234341 containerd[2012]: time="2025-09-04T00:08:50.234291228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 4 00:08:50.235621 containerd[2012]: time="2025-09-04T00:08:50.235309567Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:50.238151 containerd[2012]: time="2025-09-04T00:08:50.238024379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:50.239067 containerd[2012]: time="2025-09-04T00:08:50.238877020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.972709468s"
Sep 4 00:08:50.239067 containerd[2012]: time="2025-09-04T00:08:50.238905489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 4 00:08:50.242271 containerd[2012]: time="2025-09-04T00:08:50.241760472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 4 00:08:50.306582 containerd[2012]: time="2025-09-04T00:08:50.306373342Z" level=info msg="CreateContainer within sandbox \"8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 4 00:08:50.349811 containerd[2012]: time="2025-09-04T00:08:50.349759322Z" level=info msg="Container b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:08:50.386987 containerd[2012]: time="2025-09-04T00:08:50.386948462Z" level=info msg="CreateContainer within sandbox \"8cb436787d916d69636f47093668baf9914cb37fee131828016fb9e33c3e197b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\""
Sep 4 00:08:50.387636 containerd[2012]: time="2025-09-04T00:08:50.387604962Z" level=info msg="StartContainer for \"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\""
Sep 4 00:08:50.389988 containerd[2012]: time="2025-09-04T00:08:50.389954756Z" level=info msg="connecting to shim b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066" address="unix:///run/containerd/s/da1321e82c6d31bbce4d7a8d689ec3ce5777dc3d1c2d2c1effaec0bc36da95a5" protocol=ttrpc version=3
Sep 4 00:08:50.415190 systemd[1]: Started cri-containerd-b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066.scope - libcontainer container b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066.
Sep 4 00:08:50.490806 containerd[2012]: time="2025-09-04T00:08:50.490766076Z" level=info msg="StartContainer for \"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\" returns successfully"
Sep 4 00:08:50.541391 systemd-networkd[1879]: vxlan.calico: Gained IPv6LL
Sep 4 00:08:51.280781 kubelet[3275]: I0904 00:08:51.280663 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-655448545f-s7m6x" podStartSLOduration=27.361121742 podStartE2EDuration="33.280646315s" podCreationTimestamp="2025-09-04 00:08:18 +0000 UTC" firstStartedPulling="2025-09-04 00:08:44.32186704 +0000 UTC m=+48.694204513" lastFinishedPulling="2025-09-04 00:08:50.241391601 +0000 UTC m=+54.613729086" observedRunningTime="2025-09-04 00:08:51.279938737 +0000 UTC m=+55.652276210" watchObservedRunningTime="2025-09-04 00:08:51.280646315 +0000 UTC m=+55.652983808"
Sep 4 00:08:51.411209 containerd[2012]: time="2025-09-04T00:08:51.411163478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\" id:\"4b5d38b200b3b7ea93c08ec002ae4410ef8751f7f204cdb666cd3be9ad2df651\" pid:5637 exited_at:{seconds:1756944531 nanos:410657681}"
Sep 4 00:08:52.563934 systemd[1]: Started sshd@8-172.31.29.190:22-139.178.68.195:46420.service - OpenSSH per-connection server daemon (139.178.68.195:46420).
Sep 4 00:08:52.923621 sshd[5652]: Accepted publickey for core from 139.178.68.195 port 46420 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:08:52.928614 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:52.938839 systemd-logind[1979]: New session 9 of user core.
Sep 4 00:08:52.944474 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 00:08:53.487520 ntpd[1972]: Listen normally on 8 vxlan.calico 192.168.121.192:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 8 vxlan.calico 192.168.121.192:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 9 caliae5cdee3f83 [fe80::ecee:eeff:feee:eeee%4]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 10 cali741ece4d514 [fe80::ecee:eeff:feee:eeee%5]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 11 calie1ef844bac6 [fe80::ecee:eeff:feee:eeee%6]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 12 cali0ef2561a35a [fe80::ecee:eeff:feee:eeee%7]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 13 cali8bd8d344aea [fe80::ecee:eeff:feee:eeee%8]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 14 cali073d8fdf5a5 [fe80::ecee:eeff:feee:eeee%9]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 15 calie0be5511ae0 [fe80::ecee:eeff:feee:eeee%10]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 16 calic1776209a2b [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 17 cali04fd8f20bc7 [fe80::ecee:eeff:feee:eeee%12]:123
Sep 4 00:08:53.491525 ntpd[1972]: 4 Sep 00:08:53 ntpd[1972]: Listen normally on 18 vxlan.calico [fe80::64bf:3eff:fe43:2059%13]:123
Sep 4 00:08:53.487594 ntpd[1972]: Listen normally on 9 caliae5cdee3f83 [fe80::ecee:eeff:feee:eeee%4]:123
Sep 4 00:08:53.525618 containerd[2012]: time="2025-09-04T00:08:53.493391365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:53.525618 containerd[2012]: time="2025-09-04T00:08:53.495278188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 4 00:08:53.525618 containerd[2012]: time="2025-09-04T00:08:53.496507816Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:53.525618 containerd[2012]: time="2025-09-04T00:08:53.499216151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:53.525618 containerd[2012]: time="2025-09-04T00:08:53.500555808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.257392417s"
Sep 4 00:08:53.525618 containerd[2012]: time="2025-09-04T00:08:53.500589115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 00:08:53.487636 ntpd[1972]: Listen normally on 10 cali741ece4d514 [fe80::ecee:eeff:feee:eeee%5]:123
Sep 4 00:08:53.487663 ntpd[1972]: Listen normally on 11 calie1ef844bac6 [fe80::ecee:eeff:feee:eeee%6]:123
Sep 4 00:08:53.487691 ntpd[1972]: Listen normally on 12 cali0ef2561a35a [fe80::ecee:eeff:feee:eeee%7]:123
Sep 4 00:08:53.487717 ntpd[1972]: Listen normally on 13 cali8bd8d344aea [fe80::ecee:eeff:feee:eeee%8]:123
Sep 4 00:08:53.487741 ntpd[1972]: Listen normally on 14 cali073d8fdf5a5 [fe80::ecee:eeff:feee:eeee%9]:123
Sep 4 00:08:53.487766 ntpd[1972]: Listen normally on 15 calie0be5511ae0 [fe80::ecee:eeff:feee:eeee%10]:123
Sep 4 00:08:53.487792 ntpd[1972]: Listen normally on 16 calic1776209a2b [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 00:08:53.487821 ntpd[1972]: Listen normally on 17 cali04fd8f20bc7 [fe80::ecee:eeff:feee:eeee%12]:123
Sep 4 00:08:53.487847 ntpd[1972]: Listen normally on 18 vxlan.calico [fe80::64bf:3eff:fe43:2059%13]:123
Sep 4 00:08:53.536329 containerd[2012]: time="2025-09-04T00:08:53.536066078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 4 00:08:53.539523 containerd[2012]: time="2025-09-04T00:08:53.539354549Z" level=info msg="CreateContainer within sandbox \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 00:08:53.555103 containerd[2012]: time="2025-09-04T00:08:53.554450415Z" level=info msg="Container bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:08:53.606054 containerd[2012]: time="2025-09-04T00:08:53.606009170Z" level=info msg="CreateContainer within sandbox \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\""
Sep 4 00:08:53.607172 containerd[2012]: time="2025-09-04T00:08:53.607126197Z" level=info msg="StartContainer for \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\""
Sep 4 00:08:53.610370 containerd[2012]: time="2025-09-04T00:08:53.610333103Z" level=info msg="connecting to shim bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10" address="unix:///run/containerd/s/0e9d2d07bf2e0ae223fb55c87c2403ad5984b94b97f1258ec7772079db8622e6" protocol=ttrpc version=3
Sep 4 00:08:53.706503 systemd[1]: Started cri-containerd-bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10.scope - libcontainer container bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10.
Sep 4 00:08:53.900091 containerd[2012]: time="2025-09-04T00:08:53.899048551Z" level=info msg="StartContainer for \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" returns successfully"
Sep 4 00:08:54.133198 sshd[5654]: Connection closed by 139.178.68.195 port 46420
Sep 4 00:08:54.134512 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:54.138739 systemd[1]: sshd@8-172.31.29.190:22-139.178.68.195:46420.service: Deactivated successfully.
Sep 4 00:08:54.143482 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 00:08:54.155153 systemd-logind[1979]: Session 9 logged out. Waiting for processes to exit.
Sep 4 00:08:54.157811 systemd-logind[1979]: Removed session 9.
Sep 4 00:08:55.212387 containerd[2012]: time="2025-09-04T00:08:55.212334291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:55.214709 containerd[2012]: time="2025-09-04T00:08:55.214667387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 4 00:08:55.217117 containerd[2012]: time="2025-09-04T00:08:55.216856818Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:55.220698 containerd[2012]: time="2025-09-04T00:08:55.220091397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:55.221836 containerd[2012]: time="2025-09-04T00:08:55.221794004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.685213971s"
Sep 4 00:08:55.221836 containerd[2012]: time="2025-09-04T00:08:55.221838959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 4 00:08:55.225652 containerd[2012]: time="2025-09-04T00:08:55.225615131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 4 00:08:55.262901 containerd[2012]: time="2025-09-04T00:08:55.262854674Z" level=info msg="CreateContainer within sandbox \"07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 4 00:08:55.382267 containerd[2012]: time="2025-09-04T00:08:55.380827307Z" level=info msg="Container 643483d4778598be533952ed380e38a05edd4b34ceaecc13d221a78f03ce78cc: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:08:55.450377 kubelet[3275]: I0904 00:08:55.450317 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 00:08:55.463674 containerd[2012]: time="2025-09-04T00:08:55.463504034Z" level=info msg="CreateContainer within sandbox \"07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"643483d4778598be533952ed380e38a05edd4b34ceaecc13d221a78f03ce78cc\""
Sep 4 00:08:55.465871 containerd[2012]: time="2025-09-04T00:08:55.464643324Z" level=info msg="StartContainer for \"643483d4778598be533952ed380e38a05edd4b34ceaecc13d221a78f03ce78cc\""
Sep 4 00:08:55.468584 containerd[2012]: time="2025-09-04T00:08:55.468551398Z" level=info msg="connecting to shim 643483d4778598be533952ed380e38a05edd4b34ceaecc13d221a78f03ce78cc" address="unix:///run/containerd/s/58a6a15d1035bb50c7708a456e618a7cf6de8345c9c48db4226cc391260e05df" protocol=ttrpc version=3
Sep 4 00:08:55.506816 systemd[1]: Started cri-containerd-643483d4778598be533952ed380e38a05edd4b34ceaecc13d221a78f03ce78cc.scope - libcontainer container 643483d4778598be533952ed380e38a05edd4b34ceaecc13d221a78f03ce78cc.
Sep 4 00:08:55.596526 containerd[2012]: time="2025-09-04T00:08:55.596481830Z" level=info msg="StartContainer for \"643483d4778598be533952ed380e38a05edd4b34ceaecc13d221a78f03ce78cc\" returns successfully"
Sep 4 00:08:55.705134 containerd[2012]: time="2025-09-04T00:08:55.705066226Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:55.707892 containerd[2012]: time="2025-09-04T00:08:55.707416247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 4 00:08:55.709229 containerd[2012]: time="2025-09-04T00:08:55.709174547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 483.498426ms"
Sep 4 00:08:55.709229 containerd[2012]: time="2025-09-04T00:08:55.709212084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 00:08:55.711361 containerd[2012]: time="2025-09-04T00:08:55.711332279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 4 00:08:55.712680 containerd[2012]: time="2025-09-04T00:08:55.712651433Z" level=info msg="CreateContainer within sandbox \"acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 00:08:55.725481 containerd[2012]: time="2025-09-04T00:08:55.725381134Z" level=info msg="Container 4bd9b350a60db7470af457de096a69c7bbfb3705146383d31d4c4114ecefc469: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:08:55.748369 containerd[2012]: time="2025-09-04T00:08:55.748262440Z" level=info msg="CreateContainer within sandbox \"acfae293dd25f2b9d04b5c80e18be42fd734cfeff875158e36a7f9abff1907df\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4bd9b350a60db7470af457de096a69c7bbfb3705146383d31d4c4114ecefc469\""
Sep 4 00:08:55.749276 containerd[2012]: time="2025-09-04T00:08:55.749153112Z" level=info msg="StartContainer for \"4bd9b350a60db7470af457de096a69c7bbfb3705146383d31d4c4114ecefc469\""
Sep 4 00:08:55.750839 containerd[2012]: time="2025-09-04T00:08:55.750732748Z" level=info msg="connecting to shim 4bd9b350a60db7470af457de096a69c7bbfb3705146383d31d4c4114ecefc469" address="unix:///run/containerd/s/39e56294e0cf25eaeebf9f8d26bc8755890e5e2baa938ee25eb254e9ae8f93b1" protocol=ttrpc version=3
Sep 4 00:08:55.779462 systemd[1]: Started cri-containerd-4bd9b350a60db7470af457de096a69c7bbfb3705146383d31d4c4114ecefc469.scope - libcontainer container 4bd9b350a60db7470af457de096a69c7bbfb3705146383d31d4c4114ecefc469.
Sep 4 00:08:55.892258 containerd[2012]: time="2025-09-04T00:08:55.892113117Z" level=info msg="StartContainer for \"4bd9b350a60db7470af457de096a69c7bbfb3705146383d31d4c4114ecefc469\" returns successfully"
Sep 4 00:08:56.486262 kubelet[3275]: I0904 00:08:56.486164 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-664cb9f68d-t5fls" podStartSLOduration=34.677491153 podStartE2EDuration="43.474915542s" podCreationTimestamp="2025-09-04 00:08:13 +0000 UTC" firstStartedPulling="2025-09-04 00:08:44.737920726 +0000 UTC m=+49.110258211" lastFinishedPulling="2025-09-04 00:08:53.535345117 +0000 UTC m=+57.907682600" observedRunningTime="2025-09-04 00:08:54.36034672 +0000 UTC m=+58.732684251" watchObservedRunningTime="2025-09-04 00:08:56.474915542 +0000 UTC m=+60.847253035"
Sep 4 00:08:57.364686 kubelet[3275]: I0904 00:08:57.364040 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d5d477df-hxgst" podStartSLOduration=31.844896449 podStartE2EDuration="42.364020765s" podCreationTimestamp="2025-09-04 00:08:15 +0000 UTC" firstStartedPulling="2025-09-04 00:08:45.191001228 +0000 UTC m=+49.563338713" lastFinishedPulling="2025-09-04 00:08:55.710125546 +0000 UTC m=+60.082463029" observedRunningTime="2025-09-04 00:08:56.486441588 +0000 UTC m=+60.858779064" watchObservedRunningTime="2025-09-04 00:08:57.364020765 +0000 UTC m=+61.736358258"
Sep 4 00:08:59.171558 systemd[1]: Started sshd@9-172.31.29.190:22-139.178.68.195:46422.service - OpenSSH per-connection server daemon (139.178.68.195:46422).
Sep 4 00:08:59.424971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311038777.mount: Deactivated successfully.
Sep 4 00:08:59.469970 containerd[2012]: time="2025-09-04T00:08:59.468765019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:59.473269 containerd[2012]: time="2025-09-04T00:08:59.472799345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 4 00:08:59.475661 containerd[2012]: time="2025-09-04T00:08:59.475546836Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:59.480145 containerd[2012]: time="2025-09-04T00:08:59.479347140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:08:59.480145 containerd[2012]: time="2025-09-04T00:08:59.480082245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.768720156s"
Sep 4 00:08:59.480145 containerd[2012]: time="2025-09-04T00:08:59.480105847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 4 00:08:59.487765 sshd[5796]: Accepted publickey for core from 139.178.68.195 port 46422 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:08:59.492428 sshd-session[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:59.499863 containerd[2012]: time="2025-09-04T00:08:59.499655426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 4 00:08:59.503732 systemd-logind[1979]: New session 10 of user core.
Sep 4 00:08:59.509454 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 00:08:59.557276 containerd[2012]: time="2025-09-04T00:08:59.556381986Z" level=info msg="CreateContainer within sandbox \"d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 4 00:08:59.570557 containerd[2012]: time="2025-09-04T00:08:59.570503388Z" level=info msg="Container f969d38a72c4f9c096296ffd84301b76261da74303c03e16e1c93ed0378b63ac: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:08:59.591784 containerd[2012]: time="2025-09-04T00:08:59.591741560Z" level=info msg="CreateContainer within sandbox \"d01a8e0df8b89edf1a7e17fd31e2a663a4bd65c0662b2ddf45eaf4fd7fece43c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f969d38a72c4f9c096296ffd84301b76261da74303c03e16e1c93ed0378b63ac\""
Sep 4 00:08:59.598045 containerd[2012]: time="2025-09-04T00:08:59.597992346Z" level=info msg="StartContainer for \"f969d38a72c4f9c096296ffd84301b76261da74303c03e16e1c93ed0378b63ac\""
Sep 4 00:08:59.606303 containerd[2012]: time="2025-09-04T00:08:59.605510301Z" level=info msg="connecting to shim f969d38a72c4f9c096296ffd84301b76261da74303c03e16e1c93ed0378b63ac" address="unix:///run/containerd/s/890d3ec27061bf1a509a242f88ee2a53d81b9ba62d6a967696b7c1ed273e9a75" protocol=ttrpc version=3
Sep 4 00:08:59.691461 systemd[1]: Started cri-containerd-f969d38a72c4f9c096296ffd84301b76261da74303c03e16e1c93ed0378b63ac.scope - libcontainer container f969d38a72c4f9c096296ffd84301b76261da74303c03e16e1c93ed0378b63ac.
Sep 4 00:08:59.790468 containerd[2012]: time="2025-09-04T00:08:59.790421022Z" level=info msg="StartContainer for \"f969d38a72c4f9c096296ffd84301b76261da74303c03e16e1c93ed0378b63ac\" returns successfully"
Sep 4 00:09:00.873673 sshd[5802]: Connection closed by 139.178.68.195 port 46422
Sep 4 00:09:00.875218 sshd-session[5796]: pam_unix(sshd:session): session closed for user core
Sep 4 00:09:00.882401 systemd-logind[1979]: Session 10 logged out. Waiting for processes to exit.
Sep 4 00:09:00.882950 systemd[1]: sshd@9-172.31.29.190:22-139.178.68.195:46422.service: Deactivated successfully.
Sep 4 00:09:00.886796 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 00:09:00.891166 systemd-logind[1979]: Removed session 10.
Sep 4 00:09:00.907627 systemd[1]: Started sshd@10-172.31.29.190:22-139.178.68.195:52362.service - OpenSSH per-connection server daemon (139.178.68.195:52362).
Sep 4 00:09:01.131490 sshd[5862]: Accepted publickey for core from 139.178.68.195 port 52362 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:09:01.131580 sshd-session[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:09:01.140206 systemd-logind[1979]: New session 11 of user core.
Sep 4 00:09:01.149223 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 00:09:01.366584 kubelet[3275]: I0904 00:09:01.349395 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d54dfb589-htm7w" podStartSLOduration=4.232917329 podStartE2EDuration="20.349368507s" podCreationTimestamp="2025-09-04 00:08:41 +0000 UTC" firstStartedPulling="2025-09-04 00:08:43.372883924 +0000 UTC m=+47.745221409" lastFinishedPulling="2025-09-04 00:08:59.489335099 +0000 UTC m=+63.861672587" observedRunningTime="2025-09-04 00:09:00.522275987 +0000 UTC m=+64.894613479" watchObservedRunningTime="2025-09-04 00:09:01.349368507 +0000 UTC m=+65.721706000"
Sep 4 00:09:02.148365 sshd[5864]: Connection closed by 139.178.68.195 port 52362
Sep 4 00:09:02.150172 sshd-session[5862]: pam_unix(sshd:session): session closed for user core
Sep 4 00:09:02.162256 systemd-logind[1979]: Session 11 logged out. Waiting for processes to exit.
Sep 4 00:09:02.163123 systemd[1]: sshd@10-172.31.29.190:22-139.178.68.195:52362.service: Deactivated successfully.
Sep 4 00:09:02.173676 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 00:09:02.205887 systemd-logind[1979]: Removed session 11.
Sep 4 00:09:02.206712 systemd[1]: Started sshd@11-172.31.29.190:22-139.178.68.195:52370.service - OpenSSH per-connection server daemon (139.178.68.195:52370).
Sep 4 00:09:02.498856 sshd[5882]: Accepted publickey for core from 139.178.68.195 port 52370 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM
Sep 4 00:09:02.504131 sshd-session[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:09:02.526616 systemd-logind[1979]: New session 12 of user core.
Sep 4 00:09:02.529497 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 00:09:03.223094 sshd[5884]: Connection closed by 139.178.68.195 port 52370
Sep 4 00:09:03.224134 sshd-session[5882]: pam_unix(sshd:session): session closed for user core
Sep 4 00:09:03.245859 systemd[1]: sshd@11-172.31.29.190:22-139.178.68.195:52370.service: Deactivated successfully.
Sep 4 00:09:03.251840 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 00:09:03.256851 systemd-logind[1979]: Session 12 logged out. Waiting for processes to exit.
Sep 4 00:09:03.261069 systemd-logind[1979]: Removed session 12.
Sep 4 00:09:03.526849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567499345.mount: Deactivated successfully.
Sep 4 00:09:03.838809 kubelet[3275]: I0904 00:09:03.837810 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 00:09:04.923170 containerd[2012]: time="2025-09-04T00:09:04.923081488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:09:04.926363 containerd[2012]: time="2025-09-04T00:09:04.926319271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 4 00:09:05.078348 containerd[2012]: time="2025-09-04T00:09:05.078301020Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:09:05.081363 containerd[2012]: time="2025-09-04T00:09:05.081296199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:09:05.084869 containerd[2012]: time="2025-09-04T00:09:05.084782571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.584687195s"
Sep 4 00:09:05.084869 containerd[2012]: time="2025-09-04T00:09:05.084821642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 4 00:09:05.101486 containerd[2012]: time="2025-09-04T00:09:05.101370273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 4 00:09:05.132425 containerd[2012]: time="2025-09-04T00:09:05.131914095Z" level=info msg="CreateContainer within sandbox \"b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 4 00:09:05.193801 containerd[2012]: time="2025-09-04T00:09:05.193664042Z" level=info msg="Container 81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:09:05.265450 containerd[2012]: time="2025-09-04T00:09:05.265379590Z" level=info msg="CreateContainer within sandbox \"b7c8161a114fe1e22d5ddad1cb81219abaf46bb9f72ff07cb3fc3198295e600b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\""
Sep 4 00:09:05.266372 containerd[2012]: time="2025-09-04T00:09:05.266146985Z" level=info msg="StartContainer for \"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\""
Sep 4 00:09:05.277790 containerd[2012]: time="2025-09-04T00:09:05.277696918Z" level=info msg="connecting to shim 81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f" address="unix:///run/containerd/s/51be4f8b5487cd2104dfc426611e9d26f23ff5925de7c31824a4d104f8892b8b" protocol=ttrpc version=3
Sep 4 00:09:05.377121 systemd[1]: Started cri-containerd-81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f.scope - libcontainer container 81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f.
Sep 4 00:09:05.673427 containerd[2012]: time="2025-09-04T00:09:05.673378645Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:09:05.674545 containerd[2012]: time="2025-09-04T00:09:05.674440348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 4 00:09:05.695909 containerd[2012]: time="2025-09-04T00:09:05.695766532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 594.283184ms"
Sep 4 00:09:05.695909 containerd[2012]: time="2025-09-04T00:09:05.695842866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 00:09:05.698801 containerd[2012]: time="2025-09-04T00:09:05.698171177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 4 00:09:05.703896 containerd[2012]: time="2025-09-04T00:09:05.703161678Z" level=info msg="CreateContainer within sandbox \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 00:09:05.733937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2833748604.mount: Deactivated successfully.
Sep 4 00:09:05.739192 containerd[2012]: time="2025-09-04T00:09:05.738429081Z" level=info msg="Container 2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:09:05.750190 containerd[2012]: time="2025-09-04T00:09:05.750077117Z" level=info msg="StartContainer for \"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\" returns successfully"
Sep 4 00:09:05.753883 containerd[2012]: time="2025-09-04T00:09:05.753840145Z" level=info msg="CreateContainer within sandbox \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\""
Sep 4 00:09:05.756335 containerd[2012]: time="2025-09-04T00:09:05.755658876Z" level=info msg="StartContainer for \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\""
Sep 4 00:09:05.757796 containerd[2012]: time="2025-09-04T00:09:05.757746471Z" level=info msg="connecting to shim 2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5" address="unix:///run/containerd/s/af844aa5998dcee1bf61b8379202e07e987d3866c19c50326433500a0e59f1c9" protocol=ttrpc version=3
Sep 4 00:09:05.815625 systemd[1]: Started cri-containerd-2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5.scope - libcontainer container 2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5.
Sep 4 00:09:06.100648 containerd[2012]: time="2025-09-04T00:09:06.100459535Z" level=info msg="StartContainer for \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" returns successfully"
Sep 4 00:09:06.961317 kubelet[3275]: I0904 00:09:06.920986 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-4dg2h" podStartSLOduration=31.016726875 podStartE2EDuration="48.920962095s" podCreationTimestamp="2025-09-04 00:08:18 +0000 UTC" firstStartedPulling="2025-09-04 00:08:47.196727413 +0000 UTC m=+51.569064899" lastFinishedPulling="2025-09-04 00:09:05.100962632 +0000 UTC m=+69.473300119" observedRunningTime="2025-09-04 00:09:06.868101853 +0000 UTC m=+71.240439347" watchObservedRunningTime="2025-09-04 00:09:06.920962095 +0000 UTC m=+71.293299589"
Sep 4 00:09:06.964407 kubelet[3275]: I0904 00:09:06.962231 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-664cb9f68d-sjbqq" podStartSLOduration=35.999439179 podStartE2EDuration="53.962203939s" podCreationTimestamp="2025-09-04 00:08:13 +0000 UTC" firstStartedPulling="2025-09-04 00:08:47.735130835 +0000 UTC m=+52.107468309" lastFinishedPulling="2025-09-04 00:09:05.697895583 +0000 UTC m=+70.070233069" observedRunningTime="2025-09-04 00:09:06.960530912 +0000 UTC m=+71.332868403" watchObservedRunningTime="2025-09-04 00:09:06.962203939 +0000 UTC m=+71.334541454"
Sep 4 00:09:06.995457 containerd[2012]: time="2025-09-04T00:09:06.995385814Z" level=info msg="StopContainer for \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" with timeout 30 (s)"
Sep 4 00:09:07.025479 containerd[2012]: time="2025-09-04T00:09:07.025362195Z" level=info msg="Stop container \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" with signal terminated"
Sep 4 00:09:07.086614 systemd[1]: cri-containerd-2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5.scope: Deactivated successfully.
Sep 4 00:09:07.087021 systemd[1]: cri-containerd-2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5.scope: Consumed 669ms CPU time, 15.4M memory peak, 1.7M read from disk.
Sep 4 00:09:07.143485 containerd[2012]: time="2025-09-04T00:09:07.143404318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" id:\"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" pid:5959 exit_status:1 exited_at:{seconds:1756944547 nanos:120286839}"
Sep 4 00:09:07.144081 containerd[2012]: time="2025-09-04T00:09:07.143989309Z" level=info msg="received exit event container_id:\"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" id:\"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" pid:5959 exit_status:1 exited_at:{seconds:1756944547 nanos:120286839}"
Sep 4 00:09:07.220350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5-rootfs.mount: Deactivated successfully.
Sep 4 00:09:07.309422 containerd[2012]: time="2025-09-04T00:09:07.309318622Z" level=info msg="StopContainer for \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" returns successfully"
Sep 4 00:09:07.328522 containerd[2012]: time="2025-09-04T00:09:07.328484687Z" level=info msg="StopPodSandbox for \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\""
Sep 4 00:09:07.345078 containerd[2012]: time="2025-09-04T00:09:07.344990982Z" level=info msg="Container to stop \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:09:07.398136 containerd[2012]: time="2025-09-04T00:09:07.397758043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\" id:\"83fcc6137fe09a44d209f664a7b92025890de356530bafe6bad0aad9e00cf715\" pid:6008 exit_status:1 exited_at:{seconds:1756944547 nanos:395980014}"
Sep 4 00:09:07.411002 systemd[1]: cri-containerd-2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec.scope: Deactivated successfully.
Sep 4 00:09:07.447757 containerd[2012]: time="2025-09-04T00:09:07.428092481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" id:\"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" pid:5268 exit_status:137 exited_at:{seconds:1756944547 nanos:426427924}"
Sep 4 00:09:07.510482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec-rootfs.mount: Deactivated successfully.
Sep 4 00:09:07.542302 containerd[2012]: time="2025-09-04T00:09:07.541099306Z" level=info msg="shim disconnected" id=2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec namespace=k8s.io
Sep 4 00:09:07.542302 containerd[2012]: time="2025-09-04T00:09:07.541153108Z" level=warning msg="cleaning up after shim disconnected" id=2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec namespace=k8s.io
Sep 4 00:09:07.562085 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec-shm.mount: Deactivated successfully.
Sep 4 00:09:07.575038 containerd[2012]: time="2025-09-04T00:09:07.541169021Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 00:09:07.575777 containerd[2012]: time="2025-09-04T00:09:07.575734969Z" level=info msg="received exit event sandbox_id:\"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" exit_status:137 exited_at:{seconds:1756944547 nanos:426427924}"
Sep 4 00:09:07.712964 kubelet[3275]: I0904 00:09:07.712880 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec"
Sep 4 00:09:08.030771 containerd[2012]: time="2025-09-04T00:09:08.030719355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\" id:\"bb05a64fe1dfa22ccfee205bc94ab205ad5c9711627927a05dd1de329a8a5006\" pid:6088 exit_status:1 exited_at:{seconds:1756944548 nanos:29300582}"
Sep 4 00:09:08.148078 containerd[2012]: time="2025-09-04T00:09:08.148003413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:09:08.163984 containerd[2012]: time="2025-09-04T00:09:08.163904050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 4 00:09:08.178256 containerd[2012]: time="2025-09-04T00:09:08.177940996Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:09:08.181764 containerd[2012]: time="2025-09-04T00:09:08.181731483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:09:08.183989 containerd[2012]: time="2025-09-04T00:09:08.183955910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.483989836s"
Sep 4 00:09:08.183989 containerd[2012]: time="2025-09-04T00:09:08.183993439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 4 00:09:08.258032 containerd[2012]: time="2025-09-04T00:09:08.256215013Z" level=info msg="CreateContainer within sandbox \"07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 4 00:09:08.304744 systemd[1]: Started sshd@12-172.31.29.190:22-139.178.68.195:52386.service - OpenSSH per-connection server daemon (139.178.68.195:52386).
Sep 4 00:09:08.353489 systemd-networkd[1879]: calie0be5511ae0: Link DOWN
Sep 4 00:09:08.353500 systemd-networkd[1879]: calie0be5511ae0: Lost carrier
Sep 4 00:09:08.500404 containerd[2012]: time="2025-09-04T00:09:08.500028823Z" level=info msg="Container b1557d7e0e83e92fc93c40e601f4510388824724f2f182e04efd120514b7ae83: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:09:08.517826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270784740.mount: Deactivated successfully.
Sep 4 00:09:08.644410 containerd[2012]: time="2025-09-04T00:09:08.644128282Z" level=info msg="CreateContainer within sandbox \"07291a4a08b973494ba4aeb647a9517d3759426d080cac5f8a35582365d6880d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b1557d7e0e83e92fc93c40e601f4510388824724f2f182e04efd120514b7ae83\""
Sep 4 00:09:08.647022 containerd[2012]: time="2025-09-04T00:09:08.646083169Z" level=info msg="StartContainer for \"b1557d7e0e83e92fc93c40e601f4510388824724f2f182e04efd120514b7ae83\""
Sep 4 00:09:08.650434 containerd[2012]: time="2025-09-04T00:09:08.650403122Z" level=info msg="connecting to shim b1557d7e0e83e92fc93c40e601f4510388824724f2f182e04efd120514b7ae83" address="unix:///run/containerd/s/58a6a15d1035bb50c7708a456e618a7cf6de8345c9c48db4226cc391260e05df" protocol=ttrpc version=3
Sep 4 00:09:08.762542 systemd[1]: Started cri-containerd-b1557d7e0e83e92fc93c40e601f4510388824724f2f182e04efd120514b7ae83.scope - libcontainer container b1557d7e0e83e92fc93c40e601f4510388824724f2f182e04efd120514b7ae83.
Sep 4 00:09:08.916628 containerd[2012]: time="2025-09-04T00:09:08.916500758Z" level=info msg="StartContainer for \"b1557d7e0e83e92fc93c40e601f4510388824724f2f182e04efd120514b7ae83\" returns successfully" Sep 4 00:09:08.945180 sshd[6115]: Accepted publickey for core from 139.178.68.195 port 52386 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:08.951780 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:08.971027 systemd-logind[1979]: New session 13 of user core. Sep 4 00:09:08.978500 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 00:09:08.981654 containerd[2012]: time="2025-09-04T00:09:08.980973985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\" id:\"2e60d056b07af5cda99fdceaf215b93b8afd6c83e9e9c55d36c93c5efa45f9c7\" pid:6161 exit_status:1 exited_at:{seconds:1756944548 nanos:979229935}" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:08.244 [INFO][6075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:08.285 [INFO][6075] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" iface="eth0" netns="/var/run/netns/cni-16e2589d-c7e8-e292-ca91-d0f91b36e701" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:08.285 [INFO][6075] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" iface="eth0" netns="/var/run/netns/cni-16e2589d-c7e8-e292-ca91-d0f91b36e701" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:08.333 [INFO][6075] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" after=48.199445ms iface="eth0" netns="/var/run/netns/cni-16e2589d-c7e8-e292-ca91-d0f91b36e701" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:08.333 [INFO][6075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:08.333 [INFO][6075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:09.078 [INFO][6118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:09.086 [INFO][6118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:09.087 [INFO][6118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:09.214 [INFO][6118] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:09.214 [INFO][6118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:09.217 [INFO][6118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:09:09.224698 containerd[2012]: 2025-09-04 00:09:09.221 [INFO][6075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:09.227474 containerd[2012]: time="2025-09-04T00:09:09.227338545Z" level=info msg="TearDown network for sandbox \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" successfully" Sep 4 00:09:09.227474 containerd[2012]: time="2025-09-04T00:09:09.227388338Z" level=info msg="StopPodSandbox for \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" returns successfully" Sep 4 00:09:09.233168 systemd[1]: run-netns-cni\x2d16e2589d\x2dc7e8\x2de292\x2dca91\x2dd0f91b36e701.mount: Deactivated successfully. 
Sep 4 00:09:09.497584 kubelet[3275]: I0904 00:09:09.496732 3275 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e303bdd1-9653-4113-9d2e-efb6f383697a-calico-apiserver-certs\") pod \"e303bdd1-9653-4113-9d2e-efb6f383697a\" (UID: \"e303bdd1-9653-4113-9d2e-efb6f383697a\") " Sep 4 00:09:09.497584 kubelet[3275]: I0904 00:09:09.496838 3275 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vdtv\" (UniqueName: \"kubernetes.io/projected/e303bdd1-9653-4113-9d2e-efb6f383697a-kube-api-access-5vdtv\") pod \"e303bdd1-9653-4113-9d2e-efb6f383697a\" (UID: \"e303bdd1-9653-4113-9d2e-efb6f383697a\") " Sep 4 00:09:09.557523 systemd[1]: var-lib-kubelet-pods-e303bdd1\x2d9653\x2d4113\x2d9d2e\x2defb6f383697a-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 4 00:09:09.565092 systemd[1]: var-lib-kubelet-pods-e303bdd1\x2d9653\x2d4113\x2d9d2e\x2defb6f383697a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5vdtv.mount: Deactivated successfully. Sep 4 00:09:09.583452 kubelet[3275]: I0904 00:09:09.573702 3275 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e303bdd1-9653-4113-9d2e-efb6f383697a-kube-api-access-5vdtv" (OuterVolumeSpecName: "kube-api-access-5vdtv") pod "e303bdd1-9653-4113-9d2e-efb6f383697a" (UID: "e303bdd1-9653-4113-9d2e-efb6f383697a"). InnerVolumeSpecName "kube-api-access-5vdtv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:09:09.583452 kubelet[3275]: I0904 00:09:09.563942 3275 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e303bdd1-9653-4113-9d2e-efb6f383697a-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e303bdd1-9653-4113-9d2e-efb6f383697a" (UID: "e303bdd1-9653-4113-9d2e-efb6f383697a"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:09:09.604318 kubelet[3275]: I0904 00:09:09.603459 3275 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e303bdd1-9653-4113-9d2e-efb6f383697a-calico-apiserver-certs\") on node \"ip-172-31-29-190\" DevicePath \"\"" Sep 4 00:09:09.604318 kubelet[3275]: I0904 00:09:09.603520 3275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5vdtv\" (UniqueName: \"kubernetes.io/projected/e303bdd1-9653-4113-9d2e-efb6f383697a-kube-api-access-5vdtv\") on node \"ip-172-31-29-190\" DevicePath \"\"" Sep 4 00:09:09.768026 kubelet[3275]: I0904 00:09:09.760955 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4csm7" podStartSLOduration=28.38913718 podStartE2EDuration="51.760926407s" podCreationTimestamp="2025-09-04 00:08:18 +0000 UTC" firstStartedPulling="2025-09-04 00:08:44.822067331 +0000 UTC m=+49.194404803" lastFinishedPulling="2025-09-04 00:09:08.193856558 +0000 UTC m=+72.566194030" observedRunningTime="2025-09-04 00:09:09.758193503 +0000 UTC m=+74.130531018" watchObservedRunningTime="2025-09-04 00:09:09.760926407 +0000 UTC m=+74.133263899" Sep 4 00:09:09.823636 systemd[1]: Removed slice kubepods-besteffort-pode303bdd1_9653_4113_9d2e_efb6f383697a.slice - libcontainer container kubepods-besteffort-pode303bdd1_9653_4113_9d2e_efb6f383697a.slice. Sep 4 00:09:09.823753 systemd[1]: kubepods-besteffort-pode303bdd1_9653_4113_9d2e_efb6f383697a.slice: Consumed 730ms CPU time, 15.7M memory peak, 1.7M read from disk. Sep 4 00:09:10.033035 sshd[6184]: Connection closed by 139.178.68.195 port 52386 Sep 4 00:09:10.033492 sshd-session[6115]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:10.041522 systemd[1]: sshd@12-172.31.29.190:22-139.178.68.195:52386.service: Deactivated successfully. Sep 4 00:09:10.045119 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 4 00:09:10.048166 systemd-logind[1979]: Session 13 logged out. Waiting for processes to exit. Sep 4 00:09:10.051333 systemd-logind[1979]: Removed session 13. Sep 4 00:09:10.148767 kubelet[3275]: I0904 00:09:10.148710 3275 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 00:09:10.148767 kubelet[3275]: I0904 00:09:10.148766 3275 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 00:09:10.486544 ntpd[1972]: Deleting interface #15 calie0be5511ae0, fe80::ecee:eeff:feee:eeee%10#123, interface stats: received=0, sent=0, dropped=0, active_time=17 secs Sep 4 00:09:10.487698 ntpd[1972]: 4 Sep 00:09:10 ntpd[1972]: Deleting interface #15 calie0be5511ae0, fe80::ecee:eeff:feee:eeee%10#123, interface stats: received=0, sent=0, dropped=0, active_time=17 secs Sep 4 00:09:11.784093 kubelet[3275]: I0904 00:09:11.784044 3275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e303bdd1-9653-4113-9d2e-efb6f383697a" path="/var/lib/kubelet/pods/e303bdd1-9653-4113-9d2e-efb6f383697a/volumes" Sep 4 00:09:13.867504 containerd[2012]: time="2025-09-04T00:09:13.867461849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05\" id:\"8b7c29ff06ec21495c00bbdbd287ccb793baf725f14f563dbdf57b8f09b1ef7a\" pid:6224 exited_at:{seconds:1756944553 nanos:867036314}" Sep 4 00:09:15.073568 systemd[1]: Started sshd@13-172.31.29.190:22-139.178.68.195:58056.service - OpenSSH per-connection server daemon (139.178.68.195:58056). 
Sep 4 00:09:15.399671 sshd[6241]: Accepted publickey for core from 139.178.68.195 port 58056 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:15.404417 sshd-session[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:15.433096 systemd-logind[1979]: New session 14 of user core. Sep 4 00:09:15.436531 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 00:09:16.260887 sshd[6243]: Connection closed by 139.178.68.195 port 58056 Sep 4 00:09:16.262394 sshd-session[6241]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:16.268306 systemd-logind[1979]: Session 14 logged out. Waiting for processes to exit. Sep 4 00:09:16.269015 systemd[1]: sshd@13-172.31.29.190:22-139.178.68.195:58056.service: Deactivated successfully. Sep 4 00:09:16.272869 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 00:09:16.276139 systemd-logind[1979]: Removed session 14. Sep 4 00:09:21.313266 systemd[1]: Started sshd@14-172.31.29.190:22-139.178.68.195:36178.service - OpenSSH per-connection server daemon (139.178.68.195:36178). Sep 4 00:09:21.581946 containerd[2012]: time="2025-09-04T00:09:21.581821196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\" id:\"751fe1623589e1075d6b1aa3bee4f8e16bea87472c37d2d63479bb98482e2f94\" pid:6275 exited_at:{seconds:1756944561 nanos:565692683}" Sep 4 00:09:21.589962 sshd[6269]: Accepted publickey for core from 139.178.68.195 port 36178 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:21.594216 sshd-session[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:21.603533 systemd-logind[1979]: New session 15 of user core. Sep 4 00:09:21.610450 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 4 00:09:22.151056 sshd[6284]: Connection closed by 139.178.68.195 port 36178 Sep 4 00:09:22.151776 sshd-session[6269]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:22.157400 systemd[1]: sshd@14-172.31.29.190:22-139.178.68.195:36178.service: Deactivated successfully. Sep 4 00:09:22.160078 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 00:09:22.161727 systemd-logind[1979]: Session 15 logged out. Waiting for processes to exit. Sep 4 00:09:22.164920 systemd-logind[1979]: Removed session 15. Sep 4 00:09:22.188716 systemd[1]: Started sshd@15-172.31.29.190:22-139.178.68.195:36186.service - OpenSSH per-connection server daemon (139.178.68.195:36186). Sep 4 00:09:22.390115 sshd[6296]: Accepted publickey for core from 139.178.68.195 port 36186 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:22.392964 sshd-session[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:22.399151 systemd-logind[1979]: New session 16 of user core. Sep 4 00:09:22.405476 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 00:09:25.477492 sshd[6298]: Connection closed by 139.178.68.195 port 36186 Sep 4 00:09:25.481527 sshd-session[6296]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:25.493043 systemd[1]: sshd@15-172.31.29.190:22-139.178.68.195:36186.service: Deactivated successfully. Sep 4 00:09:25.496019 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 00:09:25.499002 systemd-logind[1979]: Session 16 logged out. Waiting for processes to exit. Sep 4 00:09:25.508331 systemd[1]: Started sshd@16-172.31.29.190:22-139.178.68.195:36192.service - OpenSSH per-connection server daemon (139.178.68.195:36192). Sep 4 00:09:25.509848 systemd-logind[1979]: Removed session 16. 
Sep 4 00:09:25.766508 sshd[6308]: Accepted publickey for core from 139.178.68.195 port 36192 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:25.769140 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:25.780086 systemd-logind[1979]: New session 17 of user core. Sep 4 00:09:25.788045 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 00:09:26.715507 sshd[6310]: Connection closed by 139.178.68.195 port 36192 Sep 4 00:09:26.718551 sshd-session[6308]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:26.726199 systemd[1]: sshd@16-172.31.29.190:22-139.178.68.195:36192.service: Deactivated successfully. Sep 4 00:09:26.729096 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 00:09:26.731229 systemd-logind[1979]: Session 17 logged out. Waiting for processes to exit. Sep 4 00:09:26.733332 systemd-logind[1979]: Removed session 17. Sep 4 00:09:26.751792 systemd[1]: Started sshd@17-172.31.29.190:22-139.178.68.195:36198.service - OpenSSH per-connection server daemon (139.178.68.195:36198). Sep 4 00:09:26.944420 sshd[6331]: Accepted publickey for core from 139.178.68.195 port 36198 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:26.945933 sshd-session[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:26.951474 systemd-logind[1979]: New session 18 of user core. Sep 4 00:09:26.957461 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 00:09:27.846832 sshd[6333]: Connection closed by 139.178.68.195 port 36198 Sep 4 00:09:27.850359 sshd-session[6331]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:27.860320 systemd-logind[1979]: Session 18 logged out. Waiting for processes to exit. Sep 4 00:09:27.861044 systemd[1]: sshd@17-172.31.29.190:22-139.178.68.195:36198.service: Deactivated successfully. 
Sep 4 00:09:27.864476 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 00:09:27.869871 systemd-logind[1979]: Removed session 18. Sep 4 00:09:27.885560 systemd[1]: Started sshd@18-172.31.29.190:22-139.178.68.195:36206.service - OpenSSH per-connection server daemon (139.178.68.195:36206). Sep 4 00:09:28.136038 sshd[6343]: Accepted publickey for core from 139.178.68.195 port 36206 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:28.137351 sshd-session[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:28.143986 systemd-logind[1979]: New session 19 of user core. Sep 4 00:09:28.148441 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 00:09:28.457295 sshd[6345]: Connection closed by 139.178.68.195 port 36206 Sep 4 00:09:28.458157 sshd-session[6343]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:28.464673 systemd[1]: sshd@18-172.31.29.190:22-139.178.68.195:36206.service: Deactivated successfully. Sep 4 00:09:28.467431 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 00:09:28.469595 systemd-logind[1979]: Session 19 logged out. Waiting for processes to exit. Sep 4 00:09:28.473503 systemd-logind[1979]: Removed session 19. Sep 4 00:09:32.866022 containerd[2012]: time="2025-09-04T00:09:32.865970623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\" id:\"9cd44e065db01df3ff99195c4faf7e4aee23a09ac904ec8254eb3754d8f422a4\" pid:6377 exited_at:{seconds:1756944572 nanos:865529390}" Sep 4 00:09:33.493512 systemd[1]: Started sshd@19-172.31.29.190:22-139.178.68.195:36820.service - OpenSSH per-connection server daemon (139.178.68.195:36820). 
Sep 4 00:09:33.720734 sshd[6389]: Accepted publickey for core from 139.178.68.195 port 36820 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:33.723981 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:33.730735 systemd-logind[1979]: New session 20 of user core. Sep 4 00:09:33.737401 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 00:09:34.186648 sshd[6391]: Connection closed by 139.178.68.195 port 36820 Sep 4 00:09:34.188570 sshd-session[6389]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:34.192709 systemd-logind[1979]: Session 20 logged out. Waiting for processes to exit. Sep 4 00:09:34.193488 systemd[1]: sshd@19-172.31.29.190:22-139.178.68.195:36820.service: Deactivated successfully. Sep 4 00:09:34.195548 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 00:09:34.197277 systemd-logind[1979]: Removed session 20. Sep 4 00:09:39.154117 containerd[2012]: time="2025-09-04T00:09:39.153992884Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\" id:\"f7d2edd50051af097b857d97a53ba2dade47cff379cae3f95112d01abaed58c2\" pid:6418 exited_at:{seconds:1756944579 nanos:26418765}" Sep 4 00:09:39.222668 systemd[1]: Started sshd@20-172.31.29.190:22-139.178.68.195:36832.service - OpenSSH per-connection server daemon (139.178.68.195:36832). Sep 4 00:09:39.477201 sshd[6432]: Accepted publickey for core from 139.178.68.195 port 36832 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:39.482507 sshd-session[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:39.490830 systemd-logind[1979]: New session 21 of user core. Sep 4 00:09:39.497430 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 4 00:09:39.560375 containerd[2012]: time="2025-09-04T00:09:39.560332335Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\" id:\"0665bb31297d236c2a3482f6b0052f048351a877ea442082ffefa9937178a5e9\" pid:6447 exited_at:{seconds:1756944579 nanos:560020726}" Sep 4 00:09:40.414740 sshd[6453]: Connection closed by 139.178.68.195 port 36832 Sep 4 00:09:40.417177 sshd-session[6432]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:40.423038 systemd-logind[1979]: Session 21 logged out. Waiting for processes to exit. Sep 4 00:09:40.423336 systemd[1]: sshd@20-172.31.29.190:22-139.178.68.195:36832.service: Deactivated successfully. Sep 4 00:09:40.426518 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 00:09:40.428644 systemd-logind[1979]: Removed session 21. Sep 4 00:09:44.319503 containerd[2012]: time="2025-09-04T00:09:44.284941569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05\" id:\"4acdf7175ea77f2b3ce8f2059df5df9b10674f9cb7547fbd8c260c0c2ace64ac\" pid:6483 exited_at:{seconds:1756944584 nanos:283542203}" Sep 4 00:09:45.447979 systemd[1]: Started sshd@21-172.31.29.190:22-139.178.68.195:34494.service - OpenSSH per-connection server daemon (139.178.68.195:34494). Sep 4 00:09:45.757488 sshd[6495]: Accepted publickey for core from 139.178.68.195 port 34494 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:45.761661 sshd-session[6495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:45.776362 systemd-logind[1979]: New session 22 of user core. Sep 4 00:09:45.777460 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 4 00:09:47.254356 sshd[6497]: Connection closed by 139.178.68.195 port 34494 Sep 4 00:09:47.256602 sshd-session[6495]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:47.262752 systemd-logind[1979]: Session 22 logged out. Waiting for processes to exit. Sep 4 00:09:47.266182 systemd[1]: sshd@21-172.31.29.190:22-139.178.68.195:34494.service: Deactivated successfully. Sep 4 00:09:47.271935 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 00:09:47.276601 systemd-logind[1979]: Removed session 22. Sep 4 00:09:51.383143 containerd[2012]: time="2025-09-04T00:09:51.383090024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\" id:\"6ef8afea72b8dfef7b7a36fd0fb65046454fe6fcd80bf8fe835c95e54e09c490\" pid:6524 exited_at:{seconds:1756944591 nanos:382314381}" Sep 4 00:09:51.729359 containerd[2012]: time="2025-09-04T00:09:51.728653146Z" level=info msg="StopContainer for \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" with timeout 30 (s)" Sep 4 00:09:51.799647 containerd[2012]: time="2025-09-04T00:09:51.799542139Z" level=info msg="Stop container \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" with signal terminated" Sep 4 00:09:51.948874 systemd[1]: cri-containerd-bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10.scope: Deactivated successfully. Sep 4 00:09:51.949289 systemd[1]: cri-containerd-bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10.scope: Consumed 860ms CPU time, 47.6M memory peak, 1.8M read from disk. 
Sep 4 00:09:52.015711 containerd[2012]: time="2025-09-04T00:09:52.015667249Z" level=info msg="received exit event container_id:\"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" id:\"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" pid:5681 exit_status:1 exited_at:{seconds:1756944591 nanos:970448887}" Sep 4 00:09:52.059980 containerd[2012]: time="2025-09-04T00:09:52.059940196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" id:\"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" pid:5681 exit_status:1 exited_at:{seconds:1756944591 nanos:970448887}" Sep 4 00:09:52.209376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10-rootfs.mount: Deactivated successfully. Sep 4 00:09:52.302872 systemd[1]: Started sshd@22-172.31.29.190:22-139.178.68.195:42350.service - OpenSSH per-connection server daemon (139.178.68.195:42350). 
Sep 4 00:09:52.358920 containerd[2012]: time="2025-09-04T00:09:52.358676427Z" level=info msg="StopContainer for \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" returns successfully" Sep 4 00:09:52.425613 containerd[2012]: time="2025-09-04T00:09:52.425359170Z" level=info msg="StopPodSandbox for \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\"" Sep 4 00:09:52.430298 containerd[2012]: time="2025-09-04T00:09:52.428510225Z" level=info msg="Container to stop \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:09:52.533717 containerd[2012]: time="2025-09-04T00:09:52.533676547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" id:\"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" pid:4921 exit_status:137 exited_at:{seconds:1756944592 nanos:533081970}" Sep 4 00:09:52.534028 systemd[1]: cri-containerd-c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421.scope: Deactivated successfully. Sep 4 00:09:52.610571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421-rootfs.mount: Deactivated successfully. Sep 4 00:09:52.636172 sshd[6558]: Accepted publickey for core from 139.178.68.195 port 42350 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:52.639074 sshd-session[6558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:52.663646 systemd-logind[1979]: New session 23 of user core. Sep 4 00:09:52.667966 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 00:09:52.727676 containerd[2012]: time="2025-09-04T00:09:52.727434997Z" level=info msg="shim disconnected" id=c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421 namespace=k8s.io Sep 4 00:09:52.727676 containerd[2012]: time="2025-09-04T00:09:52.727485396Z" level=warning msg="cleaning up after shim disconnected" id=c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421 namespace=k8s.io Sep 4 00:09:52.730143 containerd[2012]: time="2025-09-04T00:09:52.727496836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 00:09:53.052882 containerd[2012]: time="2025-09-04T00:09:53.049616148Z" level=info msg="received exit event sandbox_id:\"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" exit_status:137 exited_at:{seconds:1756944592 nanos:533081970}" Sep 4 00:09:53.069831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421-shm.mount: Deactivated successfully. Sep 4 00:09:53.631069 sshd[6585]: Connection closed by 139.178.68.195 port 42350 Sep 4 00:09:53.633991 sshd-session[6558]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:53.644282 systemd[1]: sshd@22-172.31.29.190:22-139.178.68.195:42350.service: Deactivated successfully. Sep 4 00:09:53.647978 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 00:09:53.650949 systemd-logind[1979]: Session 23 logged out. Waiting for processes to exit. Sep 4 00:09:53.653783 systemd-logind[1979]: Removed session 23. 
Sep 4 00:09:53.700709 systemd-networkd[1879]: calie1ef844bac6: Link DOWN Sep 4 00:09:53.700721 systemd-networkd[1879]: calie1ef844bac6: Lost carrier Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:53.669 [INFO][6617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:53.678 [INFO][6617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" iface="eth0" netns="/var/run/netns/cni-782cd2db-dbf3-e46e-556f-c2bf7f0df14a" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:53.679 [INFO][6617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" iface="eth0" netns="/var/run/netns/cni-782cd2db-dbf3-e46e-556f-c2bf7f0df14a" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:53.704 [INFO][6617] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" after=25.416419ms iface="eth0" netns="/var/run/netns/cni-782cd2db-dbf3-e46e-556f-c2bf7f0df14a" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:53.713 [INFO][6617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:53.713 [INFO][6617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:54.011 [INFO][6629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:54.017 [INFO][6629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:54.018 [INFO][6629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:54.086 [INFO][6629] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:54.086 [INFO][6629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:54.088 [INFO][6629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:09:54.095269 containerd[2012]: 2025-09-04 00:09:54.090 [INFO][6617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:54.096818 containerd[2012]: time="2025-09-04T00:09:54.096412791Z" level=info msg="TearDown network for sandbox \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" successfully" Sep 4 00:09:54.096818 containerd[2012]: time="2025-09-04T00:09:54.096455218Z" level=info msg="StopPodSandbox for \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" returns successfully" Sep 4 00:09:54.099738 systemd[1]: run-netns-cni\x2d782cd2db\x2ddbf3\x2de46e\x2d556f\x2dc2bf7f0df14a.mount: Deactivated successfully. 
Sep 4 00:09:54.306066 kubelet[3275]: I0904 00:09:54.305988 3275 scope.go:117] "RemoveContainer" containerID="bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10" Sep 4 00:09:54.377988 kubelet[3275]: I0904 00:09:54.377870 3275 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/85b1ac46-1183-4716-965a-da420e906863-calico-apiserver-certs\") pod \"85b1ac46-1183-4716-965a-da420e906863\" (UID: \"85b1ac46-1183-4716-965a-da420e906863\") " Sep 4 00:09:54.378579 kubelet[3275]: I0904 00:09:54.378183 3275 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbmth\" (UniqueName: \"kubernetes.io/projected/85b1ac46-1183-4716-965a-da420e906863-kube-api-access-zbmth\") pod \"85b1ac46-1183-4716-965a-da420e906863\" (UID: \"85b1ac46-1183-4716-965a-da420e906863\") " Sep 4 00:09:54.511892 systemd[1]: var-lib-kubelet-pods-85b1ac46\x2d1183\x2d4716\x2d965a\x2dda420e906863-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzbmth.mount: Deactivated successfully. Sep 4 00:09:54.512022 systemd[1]: var-lib-kubelet-pods-85b1ac46\x2d1183\x2d4716\x2d965a\x2dda420e906863-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 4 00:09:54.533358 kubelet[3275]: I0904 00:09:54.529857 3275 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b1ac46-1183-4716-965a-da420e906863-kube-api-access-zbmth" (OuterVolumeSpecName: "kube-api-access-zbmth") pod "85b1ac46-1183-4716-965a-da420e906863" (UID: "85b1ac46-1183-4716-965a-da420e906863"). InnerVolumeSpecName "kube-api-access-zbmth". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:09:54.533358 kubelet[3275]: I0904 00:09:54.533315 3275 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b1ac46-1183-4716-965a-da420e906863-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "85b1ac46-1183-4716-965a-da420e906863" (UID: "85b1ac46-1183-4716-965a-da420e906863"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:09:54.568019 systemd[1]: Removed slice kubepods-besteffort-pod85b1ac46_1183_4716_965a_da420e906863.slice - libcontainer container kubepods-besteffort-pod85b1ac46_1183_4716_965a_da420e906863.slice. Sep 4 00:09:54.568369 systemd[1]: kubepods-besteffort-pod85b1ac46_1183_4716_965a_da420e906863.slice: Consumed 903ms CPU time, 47.9M memory peak, 1.8M read from disk. Sep 4 00:09:54.622710 containerd[2012]: time="2025-09-04T00:09:54.622662178Z" level=info msg="RemoveContainer for \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\"" Sep 4 00:09:54.625643 kubelet[3275]: I0904 00:09:54.625430 3275 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/85b1ac46-1183-4716-965a-da420e906863-calico-apiserver-certs\") on node \"ip-172-31-29-190\" DevicePath \"\"" Sep 4 00:09:54.625643 kubelet[3275]: I0904 00:09:54.625466 3275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbmth\" (UniqueName: \"kubernetes.io/projected/85b1ac46-1183-4716-965a-da420e906863-kube-api-access-zbmth\") on node \"ip-172-31-29-190\" DevicePath \"\"" Sep 4 00:09:54.712640 containerd[2012]: time="2025-09-04T00:09:54.712525426Z" level=info msg="RemoveContainer for \"bd1d5f92eea85842abe0c10e9c0765ad8508219cac1fb1b56fc2558c0a0fac10\" returns successfully" Sep 4 00:09:56.052130 kubelet[3275]: I0904 00:09:56.050080 3275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="85b1ac46-1183-4716-965a-da420e906863" path="/var/lib/kubelet/pods/85b1ac46-1183-4716-965a-da420e906863/volumes" Sep 4 00:09:56.109544 kubelet[3275]: I0904 00:09:56.109512 3275 scope.go:117] "RemoveContainer" containerID="2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5" Sep 4 00:09:56.132560 containerd[2012]: time="2025-09-04T00:09:56.132525803Z" level=info msg="RemoveContainer for \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\"" Sep 4 00:09:56.146559 containerd[2012]: time="2025-09-04T00:09:56.146511305Z" level=info msg="RemoveContainer for \"2c7903964f6e61ff66b2f25de4517e30489fb2b7a6a1e3f638b8867f5ab248a5\" returns successfully" Sep 4 00:09:56.154800 containerd[2012]: time="2025-09-04T00:09:56.154723742Z" level=info msg="StopPodSandbox for \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\"" Sep 4 00:09:56.487126 ntpd[1972]: Deleting interface #11 calie1ef844bac6, fe80::ecee:eeff:feee:eeee%6#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs Sep 4 00:09:56.493745 ntpd[1972]: 4 Sep 00:09:56 ntpd[1972]: Deleting interface #11 calie1ef844bac6, fe80::ecee:eeff:feee:eeee%6#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.404 [WARNING][6657] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.404 [INFO][6657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.404 [INFO][6657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" iface="eth0" netns="" Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.404 [INFO][6657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.404 [INFO][6657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.465 [INFO][6664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.466 [INFO][6664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.466 [INFO][6664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.480 [WARNING][6664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.480 [INFO][6664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.482 [INFO][6664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:09:56.500723 containerd[2012]: 2025-09-04 00:09:56.493 [INFO][6657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:56.501908 containerd[2012]: time="2025-09-04T00:09:56.501045609Z" level=info msg="TearDown network for sandbox \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" successfully" Sep 4 00:09:56.501908 containerd[2012]: time="2025-09-04T00:09:56.501075799Z" level=info msg="StopPodSandbox for \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" returns successfully" Sep 4 00:09:56.536304 containerd[2012]: time="2025-09-04T00:09:56.535376062Z" level=info msg="RemovePodSandbox for \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\"" Sep 4 00:09:56.536304 containerd[2012]: time="2025-09-04T00:09:56.535908799Z" level=info msg="Forcibly stopping sandbox \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\"" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.637 [WARNING][6678] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.638 [INFO][6678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.638 [INFO][6678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" iface="eth0" netns="" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.638 [INFO][6678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.638 [INFO][6678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.690 [INFO][6686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.691 [INFO][6686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.692 [INFO][6686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.709 [WARNING][6686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.709 [INFO][6686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" HandleID="k8s-pod-network.2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--sjbqq-eth0" Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.711 [INFO][6686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:09:56.718228 containerd[2012]: 2025-09-04 00:09:56.715 [INFO][6678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec" Sep 4 00:09:56.720140 containerd[2012]: time="2025-09-04T00:09:56.719688867Z" level=info msg="TearDown network for sandbox \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" successfully" Sep 4 00:09:56.749708 containerd[2012]: time="2025-09-04T00:09:56.749449788Z" level=info msg="Ensure that sandbox 2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec in task-service has been cleanup successfully" Sep 4 00:09:56.801803 containerd[2012]: time="2025-09-04T00:09:56.801744788Z" level=info msg="RemovePodSandbox \"2590edfa1f5f689974eb3814d47768d994fc36b4bd3c9486836cb98f854849ec\" returns successfully" Sep 4 00:09:56.802303 containerd[2012]: time="2025-09-04T00:09:56.802278303Z" level=info msg="StopPodSandbox for \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\"" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.850 [WARNING][6700] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.850 [INFO][6700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.850 [INFO][6700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" iface="eth0" netns="" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.850 [INFO][6700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.851 [INFO][6700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.892 [INFO][6707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.893 [INFO][6707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.893 [INFO][6707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.899 [WARNING][6707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.899 [INFO][6707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.901 [INFO][6707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:09:56.908532 containerd[2012]: 2025-09-04 00:09:56.904 [INFO][6700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:56.908532 containerd[2012]: time="2025-09-04T00:09:56.908403279Z" level=info msg="TearDown network for sandbox \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" successfully" Sep 4 00:09:56.908532 containerd[2012]: time="2025-09-04T00:09:56.908427344Z" level=info msg="StopPodSandbox for \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" returns successfully" Sep 4 00:09:56.909922 containerd[2012]: time="2025-09-04T00:09:56.909091799Z" level=info msg="RemovePodSandbox for \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\"" Sep 4 00:09:56.909922 containerd[2012]: time="2025-09-04T00:09:56.909147089Z" level=info msg="Forcibly stopping sandbox \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\"" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.952 [WARNING][6721] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" WorkloadEndpoint="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.952 [INFO][6721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.952 [INFO][6721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" iface="eth0" netns="" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.952 [INFO][6721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.952 [INFO][6721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.981 [INFO][6728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.981 [INFO][6728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.981 [INFO][6728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.987 [WARNING][6728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.988 [INFO][6728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" HandleID="k8s-pod-network.c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Workload="ip--172--31--29--190-k8s-calico--apiserver--664cb9f68d--t5fls-eth0" Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.989 [INFO][6728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:09:56.994081 containerd[2012]: 2025-09-04 00:09:56.991 [INFO][6721] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421" Sep 4 00:09:56.994675 containerd[2012]: time="2025-09-04T00:09:56.994116780Z" level=info msg="TearDown network for sandbox \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" successfully" Sep 4 00:09:56.996797 containerd[2012]: time="2025-09-04T00:09:56.996764095Z" level=info msg="Ensure that sandbox c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421 in task-service has been cleanup successfully" Sep 4 00:09:57.003687 containerd[2012]: time="2025-09-04T00:09:57.003530890Z" level=info msg="RemovePodSandbox \"c4726990a4775f022c31c0a023d2a71c364217fe590ef46dfad352465964d421\" returns successfully" Sep 4 00:09:58.672465 systemd[1]: Started sshd@23-172.31.29.190:22-139.178.68.195:42364.service - OpenSSH per-connection server daemon (139.178.68.195:42364). 
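Each teardown above is bracketed by "About to acquire host-wide IPAM lock" / "Acquired" / "Released": Calico serializes IPAM mutations on a node so concurrent CNI invocations cannot race on address release. A rough stand-in for that pattern using an exclusive `flock` (the path and shape here are illustrative assumptions, not Calico's actual implementation):

```python
import fcntl
import os
from contextlib import contextmanager

@contextmanager
def host_wide_lock(path="/tmp/ipam-demo.lock"):
    """Illustrative host-wide lock: an exclusive file lock so that
    only one process at a time mutates per-host IPAM state."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # "Acquired host-wide IPAM lock"
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)   # "Released host-wide IPAM lock"
        os.close(fd)
```

Any second caller entering `host_wide_lock()` blocks until the first releases, which is why the log shows strict acquire/release pairs even with several plugin invocations (6629, 6664, 6686, ...) running close together.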
Sep 4 00:09:58.956928 sshd[6735]: Accepted publickey for core from 139.178.68.195 port 42364 ssh2: RSA SHA256:tViyCdIgW9q95ae4GHOYXF/RJWOGt5Js4waMaBkQ7BM Sep 4 00:09:58.961366 sshd-session[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:58.973308 systemd-logind[1979]: New session 24 of user core. Sep 4 00:09:58.980855 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 00:10:00.191514 sshd[6739]: Connection closed by 139.178.68.195 port 42364 Sep 4 00:10:00.196112 sshd-session[6735]: pam_unix(sshd:session): session closed for user core Sep 4 00:10:00.215151 systemd[1]: sshd@23-172.31.29.190:22-139.178.68.195:42364.service: Deactivated successfully. Sep 4 00:10:00.220661 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 00:10:00.227556 systemd-logind[1979]: Session 24 logged out. Waiting for processes to exit. Sep 4 00:10:00.231782 systemd-logind[1979]: Removed session 24. Sep 4 00:10:09.180855 containerd[2012]: time="2025-09-04T00:10:09.180778919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e10d7da6b806f1c4215951661c22e308e02e3e25a22eff593ecc1f91e60d8f\" id:\"d5a79546cedaee8bd61301edbc621f6149755a9fa049e1871ee6d4faab60d315\" pid:6767 exited_at:{seconds:1756944609 nanos:109860009}" Sep 4 00:10:13.820643 containerd[2012]: time="2025-09-04T00:10:13.820586646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3c2a2e541f0330bea7895fbccd6459a88be193e0becaa32c6aa88d6011eaa05\" id:\"b559824dfa350369fb0d9bec7023c626067e0e24b2bc22766b2c06ebe11afd34\" pid:6796 exited_at:{seconds:1756944613 nanos:820112358}" Sep 4 00:10:16.721860 systemd[1]: cri-containerd-9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8.scope: Deactivated successfully. Sep 4 00:10:16.732741 systemd[1]: cri-containerd-9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8.scope: Consumed 3.973s CPU time, 82M memory peak, 119.1M read from disk. 
Sep 4 00:10:16.861846 containerd[2012]: time="2025-09-04T00:10:16.861795748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8\" id:\"9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8\" pid:3113 exit_status:1 exited_at:{seconds:1756944616 nanos:829666240}" Sep 4 00:10:16.872179 containerd[2012]: time="2025-09-04T00:10:16.872022843Z" level=info msg="received exit event container_id:\"9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8\" id:\"9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8\" pid:3113 exit_status:1 exited_at:{seconds:1756944616 nanos:829666240}" Sep 4 00:10:16.962798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8-rootfs.mount: Deactivated successfully. Sep 4 00:10:17.132673 systemd[1]: cri-containerd-68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505.scope: Deactivated successfully. Sep 4 00:10:17.133350 systemd[1]: cri-containerd-68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505.scope: Consumed 11.871s CPU time, 107.9M memory peak, 93.6M read from disk. 
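The `exited_at:{seconds:... nanos:...}` fields in the TaskExit events above are a protobuf-style Timestamp (seconds plus nanoseconds since the Unix epoch). A small conversion sketch, which also confirms the value lines up with the journal prefix:

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
    """Convert a containerd event Timestamp (seconds + nanos) to UTC."""
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

# First TaskExit above: seconds:1756944616 nanos:829666240
# -> 2025-09-04 00:10:16.829 UTC, matching the "Sep 4 00:10:16.8..." prefix.
```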
Sep 4 00:10:17.135781 containerd[2012]: time="2025-09-04T00:10:17.135738104Z" level=info msg="received exit event container_id:\"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\" id:\"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\" pid:3593 exit_status:1 exited_at:{seconds:1756944617 nanos:135252486}" Sep 4 00:10:17.143979 containerd[2012]: time="2025-09-04T00:10:17.135634307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\" id:\"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\" pid:3593 exit_status:1 exited_at:{seconds:1756944617 nanos:135252486}" Sep 4 00:10:17.185183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505-rootfs.mount: Deactivated successfully. Sep 4 00:10:17.453179 kubelet[3275]: I0904 00:10:17.452855 3275 scope.go:117] "RemoveContainer" containerID="9f0a1f7be2afc788c2460550fd1c189f5c472afcb87277c1545e54053d316bb8" Sep 4 00:10:17.453886 kubelet[3275]: I0904 00:10:17.453280 3275 scope.go:117] "RemoveContainer" containerID="68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505" Sep 4 00:10:17.553354 containerd[2012]: time="2025-09-04T00:10:17.553114725Z" level=info msg="CreateContainer within sandbox \"6296b6ea7dc53d87c8cff92ee8b7b1a72445b47bf350e2ecbafcd2095f4101d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 4 00:10:17.554384 containerd[2012]: time="2025-09-04T00:10:17.554021686Z" level=info msg="CreateContainer within sandbox \"92b34ed66844a4266ca7f265ffb3e221c56dfde65e22ba92d76cab032c2e917b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 4 00:10:17.717607 containerd[2012]: time="2025-09-04T00:10:17.716480720Z" level=info msg="Container 25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943: CDI devices from CRI Config.CDIDevices: []" Sep 4 
00:10:17.726476 containerd[2012]: time="2025-09-04T00:10:17.726441449Z" level=info msg="Container 4fc5704ffe64a0882489c500f34ef45011c33ef7674fe647291f2f1e1388cec6: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:10:17.726522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094816439.mount: Deactivated successfully. Sep 4 00:10:17.744661 containerd[2012]: time="2025-09-04T00:10:17.744623074Z" level=info msg="CreateContainer within sandbox \"92b34ed66844a4266ca7f265ffb3e221c56dfde65e22ba92d76cab032c2e917b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943\"" Sep 4 00:10:17.745196 containerd[2012]: time="2025-09-04T00:10:17.745161910Z" level=info msg="StartContainer for \"25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943\"" Sep 4 00:10:17.747744 containerd[2012]: time="2025-09-04T00:10:17.747653999Z" level=info msg="CreateContainer within sandbox \"6296b6ea7dc53d87c8cff92ee8b7b1a72445b47bf350e2ecbafcd2095f4101d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4fc5704ffe64a0882489c500f34ef45011c33ef7674fe647291f2f1e1388cec6\"" Sep 4 00:10:17.748304 containerd[2012]: time="2025-09-04T00:10:17.748114772Z" level=info msg="StartContainer for \"4fc5704ffe64a0882489c500f34ef45011c33ef7674fe647291f2f1e1388cec6\"" Sep 4 00:10:17.748894 containerd[2012]: time="2025-09-04T00:10:17.748865592Z" level=info msg="connecting to shim 25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943" address="unix:///run/containerd/s/9450c386a5700a314714e9c0bcb7fca553d3de75d75b2935b4d23809f0944ba4" protocol=ttrpc version=3 Sep 4 00:10:17.749256 containerd[2012]: time="2025-09-04T00:10:17.749215138Z" level=info msg="connecting to shim 4fc5704ffe64a0882489c500f34ef45011c33ef7674fe647291f2f1e1388cec6" address="unix:///run/containerd/s/4f202568cf365149ab74a74c1b2b88fe2176236f7bd457d9bfd7c47966d77899" protocol=ttrpc version=3 
Sep 4 00:10:17.823625 systemd[1]: Started cri-containerd-25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943.scope - libcontainer container 25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943. Sep 4 00:10:17.834808 systemd[1]: Started cri-containerd-4fc5704ffe64a0882489c500f34ef45011c33ef7674fe647291f2f1e1388cec6.scope - libcontainer container 4fc5704ffe64a0882489c500f34ef45011c33ef7674fe647291f2f1e1388cec6. Sep 4 00:10:17.923602 containerd[2012]: time="2025-09-04T00:10:17.923563507Z" level=info msg="StartContainer for \"25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943\" returns successfully" Sep 4 00:10:17.955638 containerd[2012]: time="2025-09-04T00:10:17.954902282Z" level=info msg="StartContainer for \"4fc5704ffe64a0882489c500f34ef45011c33ef7674fe647291f2f1e1388cec6\" returns successfully" Sep 4 00:10:18.794958 kubelet[3275]: E0904 00:10:18.794913 3275 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-190?timeout=10s\": context deadline exceeded" Sep 4 00:10:21.340996 containerd[2012]: time="2025-09-04T00:10:21.340924169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\" id:\"e66e65ea31220e72400514c0049b60c4053bd9d144970549907c1bf7f9b05d17\" pid:6917 exit_status:1 exited_at:{seconds:1756944621 nanos:340650848}" Sep 4 00:10:21.474598 systemd[1]: cri-containerd-7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c.scope: Deactivated successfully. Sep 4 00:10:21.474914 systemd[1]: cri-containerd-7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c.scope: Consumed 2.133s CPU time, 36.8M memory peak, 75.2M read from disk. 
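systemd's per-scope accounting summaries ("Consumed ... CPU time, ... memory peak, ... read from disk") recur throughout this log as each container scope is torn down. A quick parser sketch for pulling them into numbers (a hypothetical helper; the regex covers only the seconds/mebibyte form seen in lines like the one just above, not variants such as "903ms"):

```python
import re

def parse_scope_accounting(line: str) -> dict:
    """Parse a systemd scope accounting summary, e.g.
    'Consumed 2.133s CPU time, 36.8M memory peak, 75.2M read from disk.'"""
    m = re.search(
        r"Consumed (?P<cpu_s>[\d.]+)s CPU time, "
        r"(?P<mem_peak_mb>[\d.]+)M memory peak, "
        r"(?P<disk_read_mb>[\d.]+)M read from disk", line)
    return {k: float(v) for k, v in m.groupdict().items()}
```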
Sep 4 00:10:21.477095 containerd[2012]: time="2025-09-04T00:10:21.477061551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c\" id:\"7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c\" pid:3122 exit_status:1 exited_at:{seconds:1756944621 nanos:476725590}" Sep 4 00:10:21.477469 containerd[2012]: time="2025-09-04T00:10:21.477420827Z" level=info msg="received exit event container_id:\"7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c\" id:\"7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c\" pid:3122 exit_status:1 exited_at:{seconds:1756944621 nanos:476725590}" Sep 4 00:10:21.512727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c-rootfs.mount: Deactivated successfully. Sep 4 00:10:22.463087 kubelet[3275]: I0904 00:10:22.462980 3275 scope.go:117] "RemoveContainer" containerID="7559a6e72f5120c6954ef7af7e7cb35286b2e54d0f7470cf827a806600711e5c" Sep 4 00:10:22.465704 containerd[2012]: time="2025-09-04T00:10:22.465660756Z" level=info msg="CreateContainer within sandbox \"2965fcbba414b3e562c03253892b9938c88eb5cf656101ad54b46ffd6974ab5f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 4 00:10:22.481259 containerd[2012]: time="2025-09-04T00:10:22.480368028Z" level=info msg="Container 5b138f703c0e97e2bbe00d9fa2d49a8576d1b2d29b3b14a4fdfdb21e01a69c0c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:10:22.491617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567270265.mount: Deactivated successfully. 
Sep 4 00:10:22.497623 containerd[2012]: time="2025-09-04T00:10:22.497576647Z" level=info msg="CreateContainer within sandbox \"2965fcbba414b3e562c03253892b9938c88eb5cf656101ad54b46ffd6974ab5f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5b138f703c0e97e2bbe00d9fa2d49a8576d1b2d29b3b14a4fdfdb21e01a69c0c\"" Sep 4 00:10:22.498104 containerd[2012]: time="2025-09-04T00:10:22.498079101Z" level=info msg="StartContainer for \"5b138f703c0e97e2bbe00d9fa2d49a8576d1b2d29b3b14a4fdfdb21e01a69c0c\"" Sep 4 00:10:22.499322 containerd[2012]: time="2025-09-04T00:10:22.499219525Z" level=info msg="connecting to shim 5b138f703c0e97e2bbe00d9fa2d49a8576d1b2d29b3b14a4fdfdb21e01a69c0c" address="unix:///run/containerd/s/ff8529acb6ba0da36a398432786a8a905488addc4683b8bc9ae8060ca98299f7" protocol=ttrpc version=3 Sep 4 00:10:22.520424 systemd[1]: Started cri-containerd-5b138f703c0e97e2bbe00d9fa2d49a8576d1b2d29b3b14a4fdfdb21e01a69c0c.scope - libcontainer container 5b138f703c0e97e2bbe00d9fa2d49a8576d1b2d29b3b14a4fdfdb21e01a69c0c. Sep 4 00:10:22.575486 containerd[2012]: time="2025-09-04T00:10:22.575431380Z" level=info msg="StartContainer for \"5b138f703c0e97e2bbe00d9fa2d49a8576d1b2d29b3b14a4fdfdb21e01a69c0c\" returns successfully" Sep 4 00:10:28.805298 kubelet[3275]: E0904 00:10:28.805175 3275 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-29-190)" Sep 4 00:10:30.370830 systemd[1]: cri-containerd-25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943.scope: Deactivated successfully. Sep 4 00:10:30.371704 systemd[1]: cri-containerd-25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943.scope: Consumed 358ms CPU time, 70.5M memory peak, 37.9M read from disk. 
Sep 4 00:10:30.372593 containerd[2012]: time="2025-09-04T00:10:30.372360695Z" level=info msg="received exit event container_id:\"25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943\" id:\"25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943\" pid:6858 exit_status:1 exited_at:{seconds:1756944630 nanos:371345723}" Sep 4 00:10:30.391668 containerd[2012]: time="2025-09-04T00:10:30.391589802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943\" id:\"25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943\" pid:6858 exit_status:1 exited_at:{seconds:1756944630 nanos:371345723}" Sep 4 00:10:30.406030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943-rootfs.mount: Deactivated successfully. Sep 4 00:10:30.492849 kubelet[3275]: I0904 00:10:30.492756 3275 scope.go:117] "RemoveContainer" containerID="68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505" Sep 4 00:10:30.493713 kubelet[3275]: I0904 00:10:30.493533 3275 scope.go:117] "RemoveContainer" containerID="25a06aa6797f1b5ab4c9af07b85a66cfdc37ced4754f2ccb6012bd86c66f6943" Sep 4 00:10:30.500806 containerd[2012]: time="2025-09-04T00:10:30.500620484Z" level=info msg="RemoveContainer for \"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\"" Sep 4 00:10:30.507735 containerd[2012]: time="2025-09-04T00:10:30.507695122Z" level=info msg="RemoveContainer for \"68f8ff0fd1ad9d7fc3b5b38f35a048639dc2fd548516b2091c145e22b7618505\" returns successfully" Sep 4 00:10:30.511363 kubelet[3275]: E0904 00:10:30.511306 3275 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-smsnw_tigera-operator(674905ca-2d68-4c09-a78a-d2ad8711f8e8)\"" 
pod="tigera-operator/tigera-operator-755d956888-smsnw" podUID="674905ca-2d68-4c09-a78a-d2ad8711f8e8" Sep 4 00:10:32.826355 containerd[2012]: time="2025-09-04T00:10:32.826309917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b513880e85fc398faca53b210ed8f6b82a330d89c721e340840db4491c58f066\" id:\"9b5ac32e5a221863ca6c73d039b1e37ff8885f04fd4c42607c6e0b07fa923773\" pid:7000 exit_status:1 exited_at:{seconds:1756944632 nanos:826011987}"
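The "back-off 10s restarting failed container ... CrashLoopBackOff" error above reflects the kubelet's restart backoff: as I understand the upstream defaults, the delay starts at 10s, doubles on each subsequent crash, and is capped at 5 minutes. A sketch of that schedule (my own helper, mirroring the documented behavior rather than kubelet code):

```python
def crashloop_delay(restart_count: int, base: int = 10, cap: int = 300) -> int:
    """CrashLoopBackOff-style delay: doubles per restart from `base`
    seconds, capped at `cap` seconds (5 minutes by default)."""
    return min(base * 2 ** restart_count, cap)

# Successive delays: 10, 20, 40, 80, 160, 300, 300, ... which is why the
# first restart of tigera-operator above is deferred by "back-off 10s".
```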