Mar 3 13:40:24.896671 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 10:59:45 -00 2026
Mar 3 13:40:24.896697 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:40:24.896709 kernel: BIOS-provided physical RAM map:
Mar 3 13:40:24.896716 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 3 13:40:24.896723 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 3 13:40:24.896729 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 3 13:40:24.896737 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 3 13:40:24.896744 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 3 13:40:24.896751 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 3 13:40:24.896758 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 3 13:40:24.896765 kernel: NX (Execute Disable) protection: active
Mar 3 13:40:24.896774 kernel: APIC: Static calls initialized
Mar 3 13:40:24.896781 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 3 13:40:24.896788 kernel: extended physical RAM map:
Mar 3 13:40:24.896796 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 3 13:40:24.896804 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Mar 3 13:40:24.896814 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Mar 3 13:40:24.896821 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Mar 3 13:40:24.896829 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 3 13:40:24.896837 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 3 13:40:24.896845 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 3 13:40:24.896862 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 3 13:40:24.896873 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 3 13:40:24.896884 kernel: efi: EFI v2.7 by EDK II
Mar 3 13:40:24.896895 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 3 13:40:24.896906 kernel: secureboot: Secure boot disabled
Mar 3 13:40:24.896917 kernel: SMBIOS 2.7 present.
Mar 3 13:40:24.896931 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 3 13:40:24.896939 kernel: DMI: Memory slots populated: 1/1
Mar 3 13:40:24.896946 kernel: Hypervisor detected: KVM
Mar 3 13:40:24.896954 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 3 13:40:24.896961 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 3 13:40:24.896969 kernel: kvm-clock: using sched offset of 5172871154 cycles
Mar 3 13:40:24.896977 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 3 13:40:24.896985 kernel: tsc: Detected 2499.998 MHz processor
Mar 3 13:40:24.896993 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 3 13:40:24.897001 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 3 13:40:24.897011 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 3 13:40:24.897018 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 3 13:40:24.897026 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 3 13:40:24.897038 kernel: Using GB pages for direct mapping
Mar 3 13:40:24.897046 kernel: ACPI: Early table checksum verification disabled
Mar 3 13:40:24.897054 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 3 13:40:24.897062 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 3 13:40:24.897073 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 3 13:40:24.897081 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 3 13:40:24.897089 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 3 13:40:24.897097 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 3 13:40:24.897105 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 3 13:40:24.897113 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 3 13:40:24.897121 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 3 13:40:24.897129 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 3 13:40:24.897140 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 3 13:40:24.897148 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 3 13:40:24.897156 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 3 13:40:24.897164 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 3 13:40:24.897172 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 3 13:40:24.897180 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 3 13:40:24.897188 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 3 13:40:24.897196 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 3 13:40:24.897206 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 3 13:40:24.897214 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 3 13:40:24.897222 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 3 13:40:24.897230 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 3 13:40:24.897238 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 3 13:40:24.897246 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 3 13:40:24.897254 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 3 13:40:24.897262 kernel: NUMA: Initialized distance table, cnt=1
Mar 3 13:40:24.897270 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Mar 3 13:40:24.897278 kernel: Zone ranges:
Mar 3 13:40:24.897288 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 3 13:40:24.897296 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 3 13:40:24.899127 kernel: Normal empty
Mar 3 13:40:24.899138 kernel: Device empty
Mar 3 13:40:24.899147 kernel: Movable zone start for each node
Mar 3 13:40:24.899155 kernel: Early memory node ranges
Mar 3 13:40:24.899164 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 3 13:40:24.899172 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 3 13:40:24.899181 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 3 13:40:24.899193 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 3 13:40:24.899202 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:40:24.899210 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 3 13:40:24.899219 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 3 13:40:24.899227 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 3 13:40:24.899236 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 3 13:40:24.899244 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 3 13:40:24.899253 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 3 13:40:24.899261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 3 13:40:24.899272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 3 13:40:24.899280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 3 13:40:24.899289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 3 13:40:24.899297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 3 13:40:24.899320 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 3 13:40:24.899328 kernel: TSC deadline timer available
Mar 3 13:40:24.899337 kernel: CPU topo: Max. logical packages: 1
Mar 3 13:40:24.899345 kernel: CPU topo: Max. logical dies: 1
Mar 3 13:40:24.899353 kernel: CPU topo: Max. dies per package: 1
Mar 3 13:40:24.899361 kernel: CPU topo: Max. threads per core: 2
Mar 3 13:40:24.899372 kernel: CPU topo: Num. cores per package: 1
Mar 3 13:40:24.899380 kernel: CPU topo: Num. threads per package: 2
Mar 3 13:40:24.899388 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 3 13:40:24.899396 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 3 13:40:24.899405 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 3 13:40:24.899413 kernel: Booting paravirtualized kernel on KVM
Mar 3 13:40:24.899422 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 3 13:40:24.899430 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 3 13:40:24.899439 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 3 13:40:24.899449 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 3 13:40:24.899457 kernel: pcpu-alloc: [0] 0 1
Mar 3 13:40:24.899466 kernel: kvm-guest: PV spinlocks enabled
Mar 3 13:40:24.899474 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 3 13:40:24.899486 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:40:24.899494 kernel: random: crng init done
Mar 3 13:40:24.899503 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 13:40:24.899511 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 3 13:40:24.899522 kernel: Fallback order for Node 0: 0
Mar 3 13:40:24.899530 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Mar 3 13:40:24.899539 kernel: Policy zone: DMA32
Mar 3 13:40:24.899555 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 13:40:24.899566 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 3 13:40:24.899575 kernel: Kernel/User page tables isolation: enabled
Mar 3 13:40:24.899584 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 3 13:40:24.899593 kernel: ftrace: allocated 157 pages with 5 groups
Mar 3 13:40:24.899602 kernel: Dynamic Preempt: voluntary
Mar 3 13:40:24.899610 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 13:40:24.899620 kernel: rcu: RCU event tracing is enabled.
Mar 3 13:40:24.899629 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 3 13:40:24.899641 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 13:40:24.899650 kernel: Rude variant of Tasks RCU enabled.
Mar 3 13:40:24.899658 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 13:40:24.899667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 13:40:24.899676 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 3 13:40:24.899688 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 13:40:24.899696 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 13:40:24.899706 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 13:40:24.899714 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 3 13:40:24.899723 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 13:40:24.899732 kernel: Console: colour dummy device 80x25
Mar 3 13:40:24.899741 kernel: printk: legacy console [tty0] enabled
Mar 3 13:40:24.899750 kernel: printk: legacy console [ttyS0] enabled
Mar 3 13:40:24.899761 kernel: ACPI: Core revision 20240827
Mar 3 13:40:24.899770 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 3 13:40:24.899779 kernel: APIC: Switch to symmetric I/O mode setup
Mar 3 13:40:24.899788 kernel: x2apic enabled
Mar 3 13:40:24.899796 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 3 13:40:24.899806 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 3 13:40:24.899815 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 3 13:40:24.899823 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 3 13:40:24.899832 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 3 13:40:24.899841 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 3 13:40:24.899851 kernel: Spectre V2 : Mitigation: Retpolines
Mar 3 13:40:24.899874 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 3 13:40:24.899883 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 3 13:40:24.899892 kernel: RETBleed: Vulnerable
Mar 3 13:40:24.899900 kernel: Speculative Store Bypass: Vulnerable
Mar 3 13:40:24.899909 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:40:24.899917 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:40:24.899926 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 3 13:40:24.899934 kernel: active return thunk: its_return_thunk
Mar 3 13:40:24.899943 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 3 13:40:24.899952 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 3 13:40:24.899963 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 3 13:40:24.899971 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 3 13:40:24.899980 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 3 13:40:24.899988 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 3 13:40:24.899997 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 3 13:40:24.900005 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 3 13:40:24.900014 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 3 13:40:24.900023 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 3 13:40:24.900031 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 3 13:40:24.900040 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 3 13:40:24.900051 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 3 13:40:24.900060 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 3 13:40:24.900068 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 3 13:40:24.900077 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 3 13:40:24.900085 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 3 13:40:24.900094 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 3 13:40:24.900102 kernel: Freeing SMP alternatives memory: 32K
Mar 3 13:40:24.900111 kernel: pid_max: default: 32768 minimum: 301
Mar 3 13:40:24.900119 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 13:40:24.900128 kernel: landlock: Up and running.
Mar 3 13:40:24.900137 kernel: SELinux: Initializing.
Mar 3 13:40:24.900145 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 3 13:40:24.900156 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 3 13:40:24.900165 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 3 13:40:24.900174 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 3 13:40:24.900183 kernel: signal: max sigframe size: 3632
Mar 3 13:40:24.900192 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 13:40:24.900201 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 13:40:24.900209 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 13:40:24.900218 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 3 13:40:24.900227 kernel: smp: Bringing up secondary CPUs ...
Mar 3 13:40:24.900239 kernel: smpboot: x86: Booting SMP configuration:
Mar 3 13:40:24.900247 kernel: .... node #0, CPUs: #1
Mar 3 13:40:24.900257 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 3 13:40:24.900266 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 3 13:40:24.900275 kernel: smp: Brought up 1 node, 2 CPUs
Mar 3 13:40:24.900284 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 3 13:40:24.900293 kernel: Memory: 1899856K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 133384K reserved, 0K cma-reserved)
Mar 3 13:40:24.900387 kernel: devtmpfs: initialized
Mar 3 13:40:24.900397 kernel: x86/mm: Memory block size: 128MB
Mar 3 13:40:24.900409 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 3 13:40:24.900418 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 13:40:24.900427 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 3 13:40:24.900436 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 13:40:24.900445 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 13:40:24.900454 kernel: audit: initializing netlink subsys (disabled)
Mar 3 13:40:24.900463 kernel: audit: type=2000 audit(1772545222.336:1): state=initialized audit_enabled=0 res=1
Mar 3 13:40:24.900472 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 13:40:24.900481 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 3 13:40:24.900492 kernel: cpuidle: using governor menu
Mar 3 13:40:24.900501 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 13:40:24.900510 kernel: dca service started, version 1.12.1
Mar 3 13:40:24.900519 kernel: PCI: Using configuration type 1 for base access
Mar 3 13:40:24.900528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 3 13:40:24.900537 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 13:40:24.900546 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 13:40:24.900554 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 13:40:24.900566 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 13:40:24.900574 kernel: ACPI: Added _OSI(Module Device)
Mar 3 13:40:24.900583 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 13:40:24.900592 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 13:40:24.900600 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 3 13:40:24.900609 kernel: ACPI: Interpreter enabled
Mar 3 13:40:24.900618 kernel: ACPI: PM: (supports S0 S5)
Mar 3 13:40:24.900627 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 3 13:40:24.900636 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 3 13:40:24.900644 kernel: PCI: Using E820 reservations for host bridge windows
Mar 3 13:40:24.900656 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 3 13:40:24.900665 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 3 13:40:24.900833 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 13:40:24.900966 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 3 13:40:24.901059 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 3 13:40:24.901070 kernel: acpiphp: Slot [3] registered
Mar 3 13:40:24.901079 kernel: acpiphp: Slot [4] registered
Mar 3 13:40:24.901092 kernel: acpiphp: Slot [5] registered
Mar 3 13:40:24.901101 kernel: acpiphp: Slot [6] registered
Mar 3 13:40:24.901110 kernel: acpiphp: Slot [7] registered
Mar 3 13:40:24.901119 kernel: acpiphp: Slot [8] registered
Mar 3 13:40:24.901128 kernel: acpiphp: Slot [9] registered
Mar 3 13:40:24.901137 kernel: acpiphp: Slot [10] registered
Mar 3 13:40:24.901145 kernel: acpiphp: Slot [11] registered
Mar 3 13:40:24.901154 kernel: acpiphp: Slot [12] registered
Mar 3 13:40:24.901163 kernel: acpiphp: Slot [13] registered
Mar 3 13:40:24.901174 kernel: acpiphp: Slot [14] registered
Mar 3 13:40:24.901183 kernel: acpiphp: Slot [15] registered
Mar 3 13:40:24.901192 kernel: acpiphp: Slot [16] registered
Mar 3 13:40:24.901200 kernel: acpiphp: Slot [17] registered
Mar 3 13:40:24.901209 kernel: acpiphp: Slot [18] registered
Mar 3 13:40:24.901218 kernel: acpiphp: Slot [19] registered
Mar 3 13:40:24.901227 kernel: acpiphp: Slot [20] registered
Mar 3 13:40:24.901235 kernel: acpiphp: Slot [21] registered
Mar 3 13:40:24.901244 kernel: acpiphp: Slot [22] registered
Mar 3 13:40:24.901252 kernel: acpiphp: Slot [23] registered
Mar 3 13:40:24.901263 kernel: acpiphp: Slot [24] registered
Mar 3 13:40:24.901272 kernel: acpiphp: Slot [25] registered
Mar 3 13:40:24.901280 kernel: acpiphp: Slot [26] registered
Mar 3 13:40:24.901289 kernel: acpiphp: Slot [27] registered
Mar 3 13:40:24.901298 kernel: acpiphp: Slot [28] registered
Mar 3 13:40:24.901320 kernel: acpiphp: Slot [29] registered
Mar 3 13:40:24.901329 kernel: acpiphp: Slot [30] registered
Mar 3 13:40:24.901338 kernel: acpiphp: Slot [31] registered
Mar 3 13:40:24.901347 kernel: PCI host bridge to bus 0000:00
Mar 3 13:40:24.901446 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 3 13:40:24.901529 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 3 13:40:24.901610 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 3 13:40:24.901691 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 3 13:40:24.901779 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 3 13:40:24.901860 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 3 13:40:24.901968 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Mar 3 13:40:24.902073 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Mar 3 13:40:24.902170 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Mar 3 13:40:24.902260 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 3 13:40:24.902367 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 3 13:40:24.902457 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 3 13:40:24.902545 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 3 13:40:24.902638 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 3 13:40:24.902726 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 3 13:40:24.902815 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 3 13:40:24.902910 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 3 13:40:24.902999 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Mar 3 13:40:24.903087 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 3 13:40:24.903175 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 3 13:40:24.903276 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Mar 3 13:40:24.903377 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Mar 3 13:40:24.903478 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Mar 3 13:40:24.903569 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Mar 3 13:40:24.903582 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 3 13:40:24.903591 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 3 13:40:24.903599 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 3 13:40:24.903612 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 3 13:40:24.903621 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 3 13:40:24.903630 kernel: iommu: Default domain type: Translated
Mar 3 13:40:24.903638 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 3 13:40:24.903647 kernel: efivars: Registered efivars operations
Mar 3 13:40:24.903656 kernel: PCI: Using ACPI for IRQ routing
Mar 3 13:40:24.904347 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 3 13:40:24.904363 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Mar 3 13:40:24.904374 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 3 13:40:24.904387 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 3 13:40:24.904531 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 3 13:40:24.904636 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 3 13:40:24.904730 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 3 13:40:24.904742 kernel: vgaarb: loaded
Mar 3 13:40:24.904751 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 3 13:40:24.904760 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 3 13:40:24.904770 kernel: clocksource: Switched to clocksource kvm-clock
Mar 3 13:40:24.904779 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 13:40:24.904791 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 13:40:24.904800 kernel: pnp: PnP ACPI init
Mar 3 13:40:24.904809 kernel: pnp: PnP ACPI: found 5 devices
Mar 3 13:40:24.904818 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 3 13:40:24.904827 kernel: NET: Registered PF_INET protocol family
Mar 3 13:40:24.904836 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 13:40:24.904845 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 3 13:40:24.904862 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 13:40:24.904871 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 3 13:40:24.904882 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 3 13:40:24.904891 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 3 13:40:24.904900 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 3 13:40:24.904909 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 3 13:40:24.904918 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 13:40:24.904927 kernel: NET: Registered PF_XDP protocol family
Mar 3 13:40:24.905016 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 3 13:40:24.905097 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 3 13:40:24.905177 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 3 13:40:24.905259 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 3 13:40:24.906712 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 3 13:40:24.906834 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 3 13:40:24.906847 kernel: PCI: CLS 0 bytes, default 64
Mar 3 13:40:24.906857 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 3 13:40:24.906867 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 3 13:40:24.906876 kernel: clocksource: Switched to clocksource tsc
Mar 3 13:40:24.906885 kernel: Initialise system trusted keyrings
Mar 3 13:40:24.906898 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 3 13:40:24.906907 kernel: Key type asymmetric registered
Mar 3 13:40:24.906916 kernel: Asymmetric key parser 'x509' registered
Mar 3 13:40:24.906925 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 3 13:40:24.906935 kernel: io scheduler mq-deadline registered
Mar 3 13:40:24.906943 kernel: io scheduler kyber registered
Mar 3 13:40:24.906952 kernel: io scheduler bfq registered
Mar 3 13:40:24.906961 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 3 13:40:24.906970 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 13:40:24.906982 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 3 13:40:24.906990 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 3 13:40:24.906999 kernel: i8042: Warning: Keylock active
Mar 3 13:40:24.907008 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 3 13:40:24.907017 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 3 13:40:24.907117 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 3 13:40:24.907205 kernel: rtc_cmos 00:00: registered as rtc0
Mar 3 13:40:24.907289 kernel: rtc_cmos 00:00: setting system clock to 2026-03-03T13:40:24 UTC (1772545224)
Mar 3 13:40:24.907393 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 3 13:40:24.907422 kernel: intel_pstate: CPU model not supported
Mar 3 13:40:24.907434 kernel: efifb: probing for efifb
Mar 3 13:40:24.907444 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Mar 3 13:40:24.907453 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 3 13:40:24.907465 kernel: efifb: scrolling: redraw
Mar 3 13:40:24.907474 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 3 13:40:24.907484 kernel: Console: switching to colour frame buffer device 100x37
Mar 3 13:40:24.907495 kernel: fb0: EFI VGA frame buffer device
Mar 3 13:40:24.907505 kernel: pstore: Using crash dump compression: deflate
Mar 3 13:40:24.907514 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 3 13:40:24.907523 kernel: NET: Registered PF_INET6 protocol family
Mar 3 13:40:24.907532 kernel: Segment Routing with IPv6
Mar 3 13:40:24.907542 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 13:40:24.907551 kernel: NET: Registered PF_PACKET protocol family
Mar 3 13:40:24.907561 kernel: Key type dns_resolver registered
Mar 3 13:40:24.907570 kernel: IPI shorthand broadcast: enabled
Mar 3 13:40:24.907579 kernel: sched_clock: Marking stable (2569002869, 142232528)->(2780072601, -68837204)
Mar 3 13:40:24.907591 kernel: registered taskstats version 1
Mar 3 13:40:24.907600 kernel: Loading compiled-in X.509 certificates
Mar 3 13:40:24.907610 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: bf135b2a3d3664cc6742f4e1848867384c1e52f1'
Mar 3 13:40:24.907619 kernel: Demotion targets for Node 0: null
Mar 3 13:40:24.907628 kernel: Key type .fscrypt registered
Mar 3 13:40:24.907637 kernel: Key type fscrypt-provisioning registered
Mar 3 13:40:24.907646 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 3 13:40:24.907655 kernel: ima: Allocated hash algorithm: sha1
Mar 3 13:40:24.907664 kernel: ima: No architecture policies found
Mar 3 13:40:24.907676 kernel: clk: Disabling unused clocks
Mar 3 13:40:24.907685 kernel: Warning: unable to open an initial console.
Mar 3 13:40:24.907694 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 3 13:40:24.907704 kernel: Write protecting the kernel read-only data: 40960k
Mar 3 13:40:24.907716 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 3 13:40:24.907727 kernel: Run /init as init process
Mar 3 13:40:24.907737 kernel: with arguments:
Mar 3 13:40:24.907746 kernel: /init
Mar 3 13:40:24.907755 kernel: with environment:
Mar 3 13:40:24.907764 kernel: HOME=/
Mar 3 13:40:24.907773 kernel: TERM=linux
Mar 3 13:40:24.907784 systemd[1]: Successfully made /usr/ read-only.
Mar 3 13:40:24.907797 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:40:24.907810 systemd[1]: Detected virtualization amazon.
Mar 3 13:40:24.907820 systemd[1]: Detected architecture x86-64.
Mar 3 13:40:24.907829 systemd[1]: Running in initrd.
Mar 3 13:40:24.907839 systemd[1]: No hostname configured, using default hostname.
Mar 3 13:40:24.907849 systemd[1]: Hostname set to .
Mar 3 13:40:24.907858 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:40:24.907868 systemd[1]: Queued start job for default target initrd.target.
Mar 3 13:40:24.907878 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:40:24.907890 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:40:24.907901 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 3 13:40:24.907911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:40:24.907921 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 3 13:40:24.907931 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 3 13:40:24.907942 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 3 13:40:24.907954 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 3 13:40:24.907964 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:40:24.907974 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:40:24.907984 systemd[1]: Reached target paths.target - Path Units.
Mar 3 13:40:24.907993 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:40:24.908003 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:40:24.908013 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 13:40:24.908023 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:40:24.908032 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:40:24.908045 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 3 13:40:24.908057 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 3 13:40:24.908067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:40:24.908077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:40:24.908087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:40:24.908096 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 13:40:24.908106 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 3 13:40:24.908116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:40:24.908126 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 3 13:40:24.908139 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 3 13:40:24.908149 systemd[1]: Starting systemd-fsck-usr.service...
Mar 3 13:40:24.908158 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:40:24.908168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:40:24.908178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:24.908188 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 3 13:40:24.908201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 3 13:40:24.908211 systemd[1]: Finished systemd-fsck-usr.service. Mar 3 13:40:24.908221 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 3 13:40:24.908255 systemd-journald[188]: Collecting audit messages is disabled. Mar 3 13:40:24.908286 systemd-journald[188]: Journal started Mar 3 13:40:24.908408 systemd-journald[188]: Runtime Journal (/run/log/journal/ec28a25d50e1a4ed0b20a2ae19aa4acb) is 4.7M, max 38.1M, 33.3M free. Mar 3 13:40:24.877578 systemd-modules-load[189]: Inserted module 'overlay' Mar 3 13:40:24.910824 systemd[1]: Started systemd-journald.service - Journal Service. Mar 3 13:40:24.916423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 3 13:40:24.923207 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 3 13:40:24.923233 kernel: Bridge firewalling registered Mar 3 13:40:24.922335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:24.922683 systemd-modules-load[189]: Inserted module 'br_netfilter' Mar 3 13:40:24.924728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 3 13:40:24.926223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 3 13:40:24.929855 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 3 13:40:24.933420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 3 13:40:24.937506 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 3 13:40:24.937661 systemd-tmpfiles[203]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 3 13:40:24.942598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:40:24.952258 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 3 13:40:24.954988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:40:24.958443 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 3 13:40:24.962361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 3 13:40:24.964452 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 3 13:40:24.990285 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c Mar 3 13:40:25.008353 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 3 13:40:25.022180 systemd-resolved[224]: Positive Trust Anchors:
Mar 3 13:40:25.023335 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 13:40:25.023404 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 13:40:25.030587 systemd-resolved[224]: Defaulting to hostname 'linux'. Mar 3 13:40:25.034043 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 13:40:25.034803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:40:25.089344 kernel: SCSI subsystem initialized Mar 3 13:40:25.099336 kernel: Loading iSCSI transport class v2.0-870. Mar 3 13:40:25.110407 kernel: iscsi: registered transport (tcp) Mar 3 13:40:25.131482 kernel: iscsi: registered transport (qla4xxx) Mar 3 13:40:25.131556 kernel: QLogic iSCSI HBA Driver Mar 3 13:40:25.152545 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 3 13:40:25.168824 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 3 13:40:25.169945 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 3 13:40:25.216146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 3 13:40:25.218426 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 3 13:40:25.270362 kernel: raid6: avx512x4 gen() 17885 MB/s Mar 3 13:40:25.288335 kernel: raid6: avx512x2 gen() 17828 MB/s Mar 3 13:40:25.306358 kernel: raid6: avx512x1 gen() 17676 MB/s Mar 3 13:40:25.324334 kernel: raid6: avx2x4 gen() 17644 MB/s Mar 3 13:40:25.342347 kernel: raid6: avx2x2 gen() 17670 MB/s Mar 3 13:40:25.360586 kernel: raid6: avx2x1 gen() 13732 MB/s Mar 3 13:40:25.360644 kernel: raid6: using algorithm avx512x4 gen() 17885 MB/s Mar 3 13:40:25.379520 kernel: raid6: .... xor() 7770 MB/s, rmw enabled Mar 3 13:40:25.379585 kernel: raid6: using avx512x2 recovery algorithm Mar 3 13:40:25.400339 kernel: xor: automatically using best checksumming function avx Mar 3 13:40:25.567344 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 3 13:40:25.574520 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 3 13:40:25.576591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 3 13:40:25.604033 systemd-udevd[436]: Using default interface naming scheme 'v255'. Mar 3 13:40:25.610954 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 3 13:40:25.617432 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 3 13:40:25.638781 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation Mar 3 13:40:25.666758 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 3 13:40:25.668715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 3 13:40:25.732213 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 3 13:40:25.736451 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 3 13:40:25.827373 kernel: cryptd: max_cpu_qlen set to 1000 Mar 3 13:40:25.841332 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Mar 3 13:40:25.848782 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 3 13:40:25.849062 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 3 13:40:25.856329 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Mar 3 13:40:25.874355 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:24:b2:7b:74:4b Mar 3 13:40:25.879336 kernel: AES CTR mode by8 optimization enabled Mar 3 13:40:25.884324 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 3 13:40:25.890337 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Mar 3 13:40:25.889989 (udev-worker)[479]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:40:25.896743 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:40:25.897032 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:25.900290 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:25.905656 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:25.907955 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 3 13:40:25.921691 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 3 13:40:25.922228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:40:25.924608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:25.927767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:25.931509 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 3 13:40:25.931551 kernel: GPT:9289727 != 33554431 Mar 3 13:40:25.931570 kernel: GPT:Alternate GPT header not at the end of the disk. 
Mar 3 13:40:25.931589 kernel: GPT:9289727 != 33554431 Mar 3 13:40:25.935578 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 3 13:40:25.935646 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 3 13:40:25.959558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:25.977388 kernel: nvme nvme0: using unchecked data buffer Mar 3 13:40:26.126115 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 3 13:40:26.126974 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 3 13:40:26.139228 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 3 13:40:26.150744 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 3 13:40:26.160189 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 3 13:40:26.160910 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 3 13:40:26.162323 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 3 13:40:26.163374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 3 13:40:26.164487 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 3 13:40:26.166254 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 3 13:40:26.170448 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 3 13:40:26.188032 disk-uuid[669]: Primary Header is updated. Mar 3 13:40:26.188032 disk-uuid[669]: Secondary Entries is updated. Mar 3 13:40:26.188032 disk-uuid[669]: Secondary Header is updated. Mar 3 13:40:26.196677 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 3 13:40:26.198205 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 3 13:40:27.209188 disk-uuid[671]: The operation has completed successfully. Mar 3 13:40:27.209956 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 3 13:40:27.342622 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 3 13:40:27.342759 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 3 13:40:27.383527 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 3 13:40:27.407841 sh[937]: Success Mar 3 13:40:27.435921 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 3 13:40:27.435991 kernel: device-mapper: uevent: version 1.0.3 Mar 3 13:40:27.436665 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 3 13:40:27.448327 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Mar 3 13:40:27.542047 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 3 13:40:27.544767 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 3 13:40:27.554062 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 3 13:40:27.570365 kernel: BTRFS: device fsid f550cb98-648e-4600-9237-4b15eb09827b devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (960) Mar 3 13:40:27.575091 kernel: BTRFS info (device dm-0): first mount of filesystem f550cb98-648e-4600-9237-4b15eb09827b Mar 3 13:40:27.575159 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:40:27.700460 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 3 13:40:27.700534 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 3 13:40:27.700549 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 3 13:40:27.716141 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Mar 3 13:40:27.717440 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 3 13:40:27.718183 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 3 13:40:27.719635 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 3 13:40:27.721979 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 3 13:40:27.759326 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (993) Mar 3 13:40:27.763746 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:27.763815 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:40:27.784122 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 3 13:40:27.784208 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 3 13:40:27.792334 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:27.793643 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 3 13:40:27.797320 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 3 13:40:27.832571 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 3 13:40:27.836365 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 3 13:40:27.881019 systemd-networkd[1129]: lo: Link UP Mar 3 13:40:27.881033 systemd-networkd[1129]: lo: Gained carrier Mar 3 13:40:27.882779 systemd-networkd[1129]: Enumeration completed Mar 3 13:40:27.882912 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 13:40:27.883877 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 3 13:40:27.883883 systemd-networkd[1129]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 3 13:40:27.886708 systemd[1]: Reached target network.target - Network. Mar 3 13:40:27.886785 systemd-networkd[1129]: eth0: Link UP Mar 3 13:40:27.886791 systemd-networkd[1129]: eth0: Gained carrier Mar 3 13:40:27.886807 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:40:27.897588 systemd-networkd[1129]: eth0: DHCPv4 address 172.31.29.215/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 3 13:40:28.193965 ignition[1084]: Ignition 2.22.0 Mar 3 13:40:28.194362 ignition[1084]: Stage: fetch-offline Mar 3 13:40:28.194563 ignition[1084]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:28.194571 ignition[1084]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:40:28.194982 ignition[1084]: Ignition finished successfully Mar 3 13:40:28.196725 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 3 13:40:28.198178 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 3 13:40:28.230050 ignition[1138]: Ignition 2.22.0 Mar 3 13:40:28.230066 ignition[1138]: Stage: fetch Mar 3 13:40:28.230391 ignition[1138]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:28.230400 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:40:28.230483 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:40:28.250504 ignition[1138]: PUT result: OK Mar 3 13:40:28.252225 ignition[1138]: parsed url from cmdline: "" Mar 3 13:40:28.252232 ignition[1138]: no config URL provided Mar 3 13:40:28.252239 ignition[1138]: reading system config file "/usr/lib/ignition/user.ign" Mar 3 13:40:28.252251 ignition[1138]: no config at "/usr/lib/ignition/user.ign" Mar 3 13:40:28.252267 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:40:28.252933 ignition[1138]: PUT result: OK Mar 3 13:40:28.253004 ignition[1138]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 3 13:40:28.253578 ignition[1138]: GET result: OK Mar 3 13:40:28.253649 ignition[1138]: parsing config with SHA512: a78912ec994eeada73810d45450f0f8d05819e8744c838fcc5f1294beced027e11b83fc82398dfa05462b3f9d41b64cb92ddb72a7004d47f9e1e8b21c5edb9e6 Mar 3 13:40:28.258906 unknown[1138]: fetched base config from "system" Mar 3 13:40:28.259239 ignition[1138]: fetch: fetch complete Mar 3 13:40:28.258916 unknown[1138]: fetched base config from "system" Mar 3 13:40:28.259244 ignition[1138]: fetch: fetch passed Mar 3 13:40:28.258921 unknown[1138]: fetched user config from "aws" Mar 3 13:40:28.259284 ignition[1138]: Ignition finished successfully Mar 3 13:40:28.261716 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 3 13:40:28.264036 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 3 13:40:28.296936 ignition[1145]: Ignition 2.22.0 Mar 3 13:40:28.296968 ignition[1145]: Stage: kargs Mar 3 13:40:28.297386 ignition[1145]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:28.297398 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:40:28.297513 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:40:28.298810 ignition[1145]: PUT result: OK Mar 3 13:40:28.301053 ignition[1145]: kargs: kargs passed Mar 3 13:40:28.301128 ignition[1145]: Ignition finished successfully Mar 3 13:40:28.303356 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 3 13:40:28.304782 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 3 13:40:28.339567 ignition[1152]: Ignition 2.22.0 Mar 3 13:40:28.339582 ignition[1152]: Stage: disks Mar 3 13:40:28.339973 ignition[1152]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:28.339985 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:40:28.340098 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:40:28.340958 ignition[1152]: PUT result: OK Mar 3 13:40:28.343295 ignition[1152]: disks: disks passed Mar 3 13:40:28.343391 ignition[1152]: Ignition finished successfully Mar 3 13:40:28.345100 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 3 13:40:28.346082 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 3 13:40:28.346743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 3 13:40:28.347075 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 3 13:40:28.347626 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 13:40:28.348172 systemd[1]: Reached target basic.target - Basic System. Mar 3 13:40:28.349898 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 3 13:40:28.399444 systemd-fsck[1160]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 3 13:40:28.402848 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 3 13:40:28.404210 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 3 13:40:28.560333 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f0c751de-febc-4e57-b330-c926d38ed5ec r/w with ordered data mode. Quota mode: none. Mar 3 13:40:28.561448 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 3 13:40:28.562512 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 3 13:40:28.565013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 3 13:40:28.568390 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 3 13:40:28.570143 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 3 13:40:28.570968 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 3 13:40:28.572847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 3 13:40:28.574817 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 3 13:40:28.576817 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 3 13:40:28.590365 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1179) Mar 3 13:40:28.593450 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:28.593509 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:40:28.601996 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 3 13:40:28.602091 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 3 13:40:28.603736 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 3 13:40:28.891895 initrd-setup-root[1203]: cut: /sysroot/etc/passwd: No such file or directory Mar 3 13:40:28.908667 initrd-setup-root[1210]: cut: /sysroot/etc/group: No such file or directory Mar 3 13:40:28.938555 initrd-setup-root[1217]: cut: /sysroot/etc/shadow: No such file or directory Mar 3 13:40:28.943081 initrd-setup-root[1224]: cut: /sysroot/etc/gshadow: No such file or directory Mar 3 13:40:29.235061 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 3 13:40:29.237244 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 3 13:40:29.240442 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 3 13:40:29.253786 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 3 13:40:29.256421 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:29.290936 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 3 13:40:29.292215 ignition[1291]: INFO : Ignition 2.22.0 Mar 3 13:40:29.292215 ignition[1291]: INFO : Stage: mount Mar 3 13:40:29.293529 ignition[1291]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:29.293529 ignition[1291]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:40:29.293529 ignition[1291]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:40:29.295168 ignition[1291]: INFO : PUT result: OK Mar 3 13:40:29.297230 ignition[1291]: INFO : mount: mount passed Mar 3 13:40:29.298387 ignition[1291]: INFO : Ignition finished successfully Mar 3 13:40:29.299010 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 3 13:40:29.301039 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 3 13:40:29.322151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 3 13:40:29.349416 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1304) Mar 3 13:40:29.353283 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:29.353378 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:40:29.360252 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 3 13:40:29.360353 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 3 13:40:29.362383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 3 13:40:29.394149 ignition[1320]: INFO : Ignition 2.22.0 Mar 3 13:40:29.394149 ignition[1320]: INFO : Stage: files Mar 3 13:40:29.395835 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:29.395835 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:40:29.395835 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:40:29.397216 ignition[1320]: INFO : PUT result: OK Mar 3 13:40:29.398782 ignition[1320]: DEBUG : files: compiled without relabeling support, skipping Mar 3 13:40:29.400577 ignition[1320]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 3 13:40:29.400577 ignition[1320]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 3 13:40:29.404143 ignition[1320]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 3 13:40:29.405138 ignition[1320]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 3 13:40:29.405138 ignition[1320]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 3 13:40:29.404711 unknown[1320]: wrote ssh authorized keys file for user: core Mar 3 13:40:29.407386 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" 
Mar 3 13:40:29.407386 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 3 13:40:29.493288 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 3 13:40:29.690613 systemd-networkd[1129]: eth0: Gained IPv6LL Mar 3 13:40:29.695070 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 3 13:40:29.696676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 3 13:40:29.696676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 3 13:40:29.696676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 3 13:40:29.696676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 3 13:40:29.696676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 3 13:40:29.696676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 3 13:40:29.696676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 3 13:40:29.696676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 3 13:40:29.702955 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:40:29.702955 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 3 13:40:29.702955 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 3 13:40:29.706059 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 3 13:40:29.706059 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 3 13:40:29.706059 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 3 13:40:30.189112 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 3 13:40:32.689527 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 3 13:40:32.689527 ignition[1320]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 3 13:40:32.703608 ignition[1320]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 3 13:40:32.709120 ignition[1320]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 3 13:40:32.709120 ignition[1320]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 3 13:40:32.709120 ignition[1320]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 3 13:40:32.711600 ignition[1320]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 13:40:32.711600 ignition[1320]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 3 13:40:32.711600 ignition[1320]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 3 13:40:32.711600 ignition[1320]: INFO : files: files passed Mar 3 13:40:32.711600 ignition[1320]: INFO : Ignition finished successfully Mar 3 13:40:32.712669 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 3 13:40:32.715486 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 3 13:40:32.717138 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 3 13:40:32.726450 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 3 13:40:32.727140 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 3 13:40:32.741921 initrd-setup-root-after-ignition[1350]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 3 13:40:32.741921 initrd-setup-root-after-ignition[1350]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 3 13:40:32.745034 initrd-setup-root-after-ignition[1354]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 3 13:40:32.746249 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 3 13:40:32.747122 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 3 13:40:32.748592 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 3 13:40:32.794110 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 3 13:40:32.794232 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 3 13:40:32.795075 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 3 13:40:32.795811 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Mar 3 13:40:32.797177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 3 13:40:32.798402 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 3 13:40:32.823967 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 3 13:40:32.826036 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 3 13:40:32.850577 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:40:32.851247 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 3 13:40:32.852272 systemd[1]: Stopped target timers.target - Timer Units. Mar 3 13:40:32.853272 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 3 13:40:32.853515 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 3 13:40:32.854621 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 3 13:40:32.855487 systemd[1]: Stopped target basic.target - Basic System. Mar 3 13:40:32.856207 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 3 13:40:32.857171 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 3 13:40:32.857934 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 3 13:40:32.858717 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 3 13:40:32.859467 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 3 13:40:32.860252 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 3 13:40:32.861196 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 3 13:40:32.862283 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 3 13:40:32.863070 systemd[1]: Stopped target swap.target - Swaps. 
Mar 3 13:40:32.863830 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 3 13:40:32.864054 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 3 13:40:32.865231 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 3 13:40:32.866070 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 3 13:40:32.866739 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 3 13:40:32.867443 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 3 13:40:32.868036 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 3 13:40:32.868252 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 3 13:40:32.869780 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 3 13:40:32.870023 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 3 13:40:32.870733 systemd[1]: ignition-files.service: Deactivated successfully. Mar 3 13:40:32.870891 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 3 13:40:32.873430 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 3 13:40:32.878568 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 3 13:40:32.879240 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 3 13:40:32.879461 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 3 13:40:32.880665 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 3 13:40:32.880905 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 3 13:40:32.888496 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 3 13:40:32.890378 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 3 13:40:32.912680 ignition[1374]: INFO : Ignition 2.22.0 Mar 3 13:40:32.912680 ignition[1374]: INFO : Stage: umount Mar 3 13:40:32.916045 ignition[1374]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:32.916045 ignition[1374]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:40:32.916045 ignition[1374]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:40:32.916045 ignition[1374]: INFO : PUT result: OK Mar 3 13:40:32.918804 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 3 13:40:32.920677 ignition[1374]: INFO : umount: umount passed Mar 3 13:40:32.920677 ignition[1374]: INFO : Ignition finished successfully Mar 3 13:40:32.923263 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 3 13:40:32.923426 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 3 13:40:32.924425 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 3 13:40:32.924495 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 3 13:40:32.925072 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 3 13:40:32.925135 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 3 13:40:32.925754 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 3 13:40:32.925812 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 3 13:40:32.926531 systemd[1]: Stopped target network.target - Network. Mar 3 13:40:32.927129 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 3 13:40:32.927196 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 3 13:40:32.927810 systemd[1]: Stopped target paths.target - Path Units. Mar 3 13:40:32.928393 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 3 13:40:32.930362 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 3 13:40:32.930771 systemd[1]: Stopped target slices.target - Slice Units. Mar 3 13:40:32.931694 systemd[1]: Stopped target sockets.target - Socket Units. Mar 3 13:40:32.932382 systemd[1]: iscsid.socket: Deactivated successfully. Mar 3 13:40:32.932439 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 3 13:40:32.933116 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 3 13:40:32.933161 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 3 13:40:32.933742 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 3 13:40:32.933818 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 3 13:40:32.934405 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 3 13:40:32.934462 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 3 13:40:32.935192 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 3 13:40:32.935849 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 3 13:40:32.938473 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 3 13:40:32.939033 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 3 13:40:32.942279 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 3 13:40:32.943714 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 3 13:40:32.943820 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:40:32.945911 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 3 13:40:32.946711 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 3 13:40:32.946845 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 3 13:40:32.949509 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Mar 3 13:40:32.950156 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 3 13:40:32.951819 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 3 13:40:32.951876 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 3 13:40:32.953706 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 3 13:40:32.954843 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 3 13:40:32.954917 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 3 13:40:32.955534 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 3 13:40:32.955597 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:40:32.958978 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 3 13:40:32.959032 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 3 13:40:32.960177 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 3 13:40:32.964113 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 3 13:40:32.971711 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 3 13:40:32.971906 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 3 13:40:32.975281 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 3 13:40:32.975451 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 3 13:40:32.976434 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 3 13:40:32.976486 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 3 13:40:32.978011 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 3 13:40:32.978085 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 3 13:40:32.979457 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Mar 3 13:40:32.979532 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 3 13:40:32.980547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 3 13:40:32.980618 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 3 13:40:32.985396 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 3 13:40:32.986187 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 3 13:40:32.986277 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 3 13:40:32.989610 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 3 13:40:32.989693 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 3 13:40:32.990730 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 3 13:40:32.990800 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 3 13:40:32.991997 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 3 13:40:32.992068 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 3 13:40:32.992879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:40:32.992942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:32.994682 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 3 13:40:33.000750 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 3 13:40:33.008929 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 3 13:40:33.009550 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 3 13:40:33.063576 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 3 13:40:33.063694 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Mar 3 13:40:33.065198 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 3 13:40:33.065724 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 3 13:40:33.065800 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 3 13:40:33.067267 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 3 13:40:33.084449 systemd[1]: Switching root. Mar 3 13:40:33.126172 systemd-journald[188]: Journal stopped Mar 3 13:40:35.044746 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Mar 3 13:40:35.044852 kernel: SELinux: policy capability network_peer_controls=1 Mar 3 13:40:35.044879 kernel: SELinux: policy capability open_perms=1 Mar 3 13:40:35.044899 kernel: SELinux: policy capability extended_socket_class=1 Mar 3 13:40:35.044919 kernel: SELinux: policy capability always_check_network=0 Mar 3 13:40:35.044939 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 3 13:40:35.044960 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 3 13:40:35.044987 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 3 13:40:35.045018 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 3 13:40:35.045038 kernel: SELinux: policy capability userspace_initial_context=0 Mar 3 13:40:35.045059 kernel: audit: type=1403 audit(1772545233.537:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 3 13:40:35.045080 systemd[1]: Successfully loaded SELinux policy in 80.141ms. Mar 3 13:40:35.045124 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.736ms. Mar 3 13:40:35.045148 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 3 13:40:35.045170 systemd[1]: Detected virtualization amazon. 
Mar 3 13:40:35.045192 systemd[1]: Detected architecture x86-64. Mar 3 13:40:35.045217 systemd[1]: Detected first boot. Mar 3 13:40:35.045238 systemd[1]: Initializing machine ID from VM UUID. Mar 3 13:40:35.045259 zram_generator::config[1418]: No configuration found. Mar 3 13:40:35.045281 kernel: Guest personality initialized and is inactive Mar 3 13:40:35.048345 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 3 13:40:35.048396 kernel: Initialized host personality Mar 3 13:40:35.048417 kernel: NET: Registered PF_VSOCK protocol family Mar 3 13:40:35.048440 systemd[1]: Populated /etc with preset unit settings. Mar 3 13:40:35.048473 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 3 13:40:35.048493 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 3 13:40:35.048519 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 3 13:40:35.048540 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 3 13:40:35.048561 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 3 13:40:35.048581 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 3 13:40:35.048602 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 3 13:40:35.048622 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 3 13:40:35.048643 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 3 13:40:35.048677 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 3 13:40:35.048699 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 3 13:40:35.048719 systemd[1]: Created slice user.slice - User and Session Slice. Mar 3 13:40:35.048741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 3 13:40:35.048763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 3 13:40:35.048796 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 3 13:40:35.048816 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 3 13:40:35.048838 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 3 13:40:35.048862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 3 13:40:35.048883 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 3 13:40:35.048901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 3 13:40:35.048925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 3 13:40:35.048945 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 3 13:40:35.048963 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 3 13:40:35.048981 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 3 13:40:35.049000 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 3 13:40:35.049024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 3 13:40:35.049045 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 3 13:40:35.049068 systemd[1]: Reached target slices.target - Slice Units. Mar 3 13:40:35.049089 systemd[1]: Reached target swap.target - Swaps. Mar 3 13:40:35.049108 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 3 13:40:35.049126 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 3 13:40:35.049144 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Mar 3 13:40:35.049163 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 3 13:40:35.049180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 3 13:40:35.049198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 3 13:40:35.049220 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 3 13:40:35.049238 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 3 13:40:35.049258 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 3 13:40:35.049277 systemd[1]: Mounting media.mount - External Media Directory... Mar 3 13:40:35.049296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:35.057032 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 3 13:40:35.057062 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 3 13:40:35.057083 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 3 13:40:35.057114 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 3 13:40:35.057136 systemd[1]: Reached target machines.target - Containers. Mar 3 13:40:35.057156 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 3 13:40:35.057181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 3 13:40:35.057200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 3 13:40:35.057220 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 3 13:40:35.057238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 3 13:40:35.057255 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 3 13:40:35.057273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 3 13:40:35.057295 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 3 13:40:35.057330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 3 13:40:35.057352 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 3 13:40:35.057374 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 3 13:40:35.057397 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 3 13:40:35.057418 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 3 13:40:35.057442 systemd[1]: Stopped systemd-fsck-usr.service. Mar 3 13:40:35.057465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 3 13:40:35.057492 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 3 13:40:35.057513 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 3 13:40:35.057533 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 3 13:40:35.057552 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 3 13:40:35.057569 kernel: loop: module loaded Mar 3 13:40:35.057584 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 3 13:40:35.057596 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 3 13:40:35.057612 systemd[1]: verity-setup.service: Deactivated successfully. 
Mar 3 13:40:35.057625 systemd[1]: Stopped verity-setup.service. Mar 3 13:40:35.057639 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:35.057655 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 3 13:40:35.057667 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 3 13:40:35.057680 systemd[1]: Mounted media.mount - External Media Directory. Mar 3 13:40:35.057692 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 3 13:40:35.057704 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 3 13:40:35.057717 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 3 13:40:35.057730 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 3 13:40:35.057742 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 3 13:40:35.057754 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 3 13:40:35.057769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 3 13:40:35.057781 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 3 13:40:35.057793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 3 13:40:35.057806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 3 13:40:35.057819 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 3 13:40:35.057831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 3 13:40:35.057844 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 3 13:40:35.057858 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 3 13:40:35.057870 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Mar 3 13:40:35.057885 kernel: ACPI: bus type drm_connector registered Mar 3 13:40:35.057897 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 3 13:40:35.057910 kernel: fuse: init (API version 7.41) Mar 3 13:40:35.057922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 13:40:35.057934 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 3 13:40:35.057952 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 3 13:40:35.057975 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 3 13:40:35.057997 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 3 13:40:35.058017 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 3 13:40:35.058036 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 3 13:40:35.058056 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 3 13:40:35.058075 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 3 13:40:35.058094 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 3 13:40:35.058166 systemd-journald[1501]: Collecting audit messages is disabled. Mar 3 13:40:35.058207 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 3 13:40:35.058229 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 3 13:40:35.058247 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 3 13:40:35.058269 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 3 13:40:35.058290 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 3 13:40:35.058324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 3 13:40:35.058345 systemd-journald[1501]: Journal started Mar 3 13:40:35.058387 systemd-journald[1501]: Runtime Journal (/run/log/journal/ec28a25d50e1a4ed0b20a2ae19aa4acb) is 4.7M, max 38.1M, 33.3M free. Mar 3 13:40:34.546021 systemd[1]: Queued start job for default target multi-user.target. Mar 3 13:40:34.558736 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 3 13:40:34.559250 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 3 13:40:35.070652 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 3 13:40:35.070727 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 3 13:40:35.077334 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 3 13:40:35.086432 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 3 13:40:35.095323 systemd[1]: Started systemd-journald.service - Journal Service. Mar 3 13:40:35.095574 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Mar 3 13:40:35.095604 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Mar 3 13:40:35.099703 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 3 13:40:35.100919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 3 13:40:35.102375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:40:35.104296 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 3 13:40:35.106965 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Mar 3 13:40:35.112226 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 3 13:40:35.131442 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 3 13:40:35.132721 kernel: loop0: detected capacity change from 0 to 219192 Mar 3 13:40:35.133929 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 3 13:40:35.138018 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 3 13:40:35.141857 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 3 13:40:35.170632 systemd-journald[1501]: Time spent on flushing to /var/log/journal/ec28a25d50e1a4ed0b20a2ae19aa4acb is 56.553ms for 1027 entries. Mar 3 13:40:35.170632 systemd-journald[1501]: System Journal (/var/log/journal/ec28a25d50e1a4ed0b20a2ae19aa4acb) is 8M, max 195.6M, 187.6M free. Mar 3 13:40:35.240241 systemd-journald[1501]: Received client request to flush runtime journal. Mar 3 13:40:35.240319 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 3 13:40:35.180161 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 3 13:40:35.241943 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 3 13:40:35.260085 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 3 13:40:35.270444 kernel: loop1: detected capacity change from 0 to 110984 Mar 3 13:40:35.265411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 3 13:40:35.318201 systemd-tmpfiles[1573]: ACLs are not supported, ignoring. Mar 3 13:40:35.318357 systemd-tmpfiles[1573]: ACLs are not supported, ignoring. Mar 3 13:40:35.324252 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 3 13:40:35.398685 kernel: loop2: detected capacity change from 0 to 128560 Mar 3 13:40:35.504333 kernel: loop3: detected capacity change from 0 to 72368 Mar 3 13:40:35.561410 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 3 13:40:35.616336 kernel: loop4: detected capacity change from 0 to 219192 Mar 3 13:40:35.650333 kernel: loop5: detected capacity change from 0 to 110984 Mar 3 13:40:35.681467 kernel: loop6: detected capacity change from 0 to 128560 Mar 3 13:40:35.705324 kernel: loop7: detected capacity change from 0 to 72368 Mar 3 13:40:35.723011 (sd-merge)[1581]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 3 13:40:35.724349 (sd-merge)[1581]: Merged extensions into '/usr'. Mar 3 13:40:35.736923 systemd[1]: Reload requested from client PID 1537 ('systemd-sysext') (unit systemd-sysext.service)... Mar 3 13:40:35.737095 systemd[1]: Reloading... Mar 3 13:40:35.836328 zram_generator::config[1603]: No configuration found. Mar 3 13:40:36.153731 systemd[1]: Reloading finished in 415 ms. Mar 3 13:40:36.174855 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 3 13:40:36.175625 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 3 13:40:36.183399 systemd[1]: Starting ensure-sysext.service... Mar 3 13:40:36.185040 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 3 13:40:36.188434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 3 13:40:36.215474 systemd[1]: Reload requested from client PID 1659 ('systemctl') (unit ensure-sysext.service)... Mar 3 13:40:36.215493 systemd[1]: Reloading... Mar 3 13:40:36.222062 systemd-tmpfiles[1660]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Mar 3 13:40:36.222098 systemd-tmpfiles[1660]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 3 13:40:36.222410 systemd-tmpfiles[1660]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 3 13:40:36.222666 systemd-tmpfiles[1660]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 3 13:40:36.223520 systemd-tmpfiles[1660]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 3 13:40:36.223885 systemd-tmpfiles[1660]: ACLs are not supported, ignoring. Mar 3 13:40:36.223999 systemd-tmpfiles[1660]: ACLs are not supported, ignoring. Mar 3 13:40:36.232376 systemd-tmpfiles[1660]: Detected autofs mount point /boot during canonicalization of boot. Mar 3 13:40:36.233162 systemd-tmpfiles[1660]: Skipping /boot Mar 3 13:40:36.246374 systemd-udevd[1661]: Using default interface naming scheme 'v255'. Mar 3 13:40:36.250555 systemd-tmpfiles[1660]: Detected autofs mount point /boot during canonicalization of boot. Mar 3 13:40:36.250971 systemd-tmpfiles[1660]: Skipping /boot Mar 3 13:40:36.289331 zram_generator::config[1688]: No configuration found. Mar 3 13:40:36.530841 (udev-worker)[1749]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:40:36.604334 kernel: mousedev: PS/2 mouse device common for all mice Mar 3 13:40:36.611352 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 3 13:40:36.611993 systemd[1]: Reloading finished in 396 ms. Mar 3 13:40:36.623504 kernel: ACPI: button: Power Button [PWRF] Mar 3 13:40:36.623564 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Mar 3 13:40:36.622423 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 3 13:40:36.631662 kernel: ACPI: button: Sleep Button [SLPF] Mar 3 13:40:36.631266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:40:36.640995 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 3 13:40:36.645331 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 3 13:40:36.648499 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 3 13:40:36.658475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 3 13:40:36.667222 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 3 13:40:36.671483 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 3 13:40:36.681507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 3 13:40:36.692990 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 3 13:40:36.703180 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:36.703668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 3 13:40:36.706524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 3 13:40:36.714996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 3 13:40:36.718374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 3 13:40:36.718844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 3 13:40:36.718956 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 3 13:40:36.719045 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:36.719962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 3 13:40:36.720119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 3 13:40:36.725597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:36.725810 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 3 13:40:36.733105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 3 13:40:36.733597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 3 13:40:36.734175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 3 13:40:36.736987 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 3 13:40:36.737724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:36.749805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:36.751354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 3 13:40:36.753918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 3 13:40:36.755112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 3 13:40:36.755507 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 3 13:40:36.755769 systemd[1]: Reached target time-set.target - System Time Set. Mar 3 13:40:36.757006 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:36.768743 systemd[1]: Finished ensure-sysext.service. Mar 3 13:40:36.770550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 3 13:40:36.771889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 3 13:40:36.772557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 3 13:40:36.775090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 3 13:40:36.784411 ldconfig[1532]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 3 13:40:36.789038 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 3 13:40:36.800616 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 3 13:40:36.802494 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 3 13:40:36.803376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 3 13:40:36.805042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 3 13:40:36.805664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 3 13:40:36.807421 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 3 13:40:36.807794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 3 13:40:36.813949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 3 13:40:36.817476 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 3 13:40:36.847047 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 3 13:40:36.850125 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 3 13:40:36.850944 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 3 13:40:36.861849 augenrules[1851]: No rules Mar 3 13:40:36.863478 systemd[1]: audit-rules.service: Deactivated successfully. Mar 3 13:40:36.863697 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 3 13:40:36.929027 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 3 13:40:36.977636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:37.076814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:40:37.077094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:37.082920 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 3 13:40:37.100546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 3 13:40:37.144011 systemd-networkd[1790]: lo: Link UP Mar 3 13:40:37.144028 systemd-networkd[1790]: lo: Gained carrier Mar 3 13:40:37.146916 systemd-networkd[1790]: Enumeration completed Mar 3 13:40:37.147086 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 13:40:37.147396 systemd-networkd[1790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:40:37.147402 systemd-networkd[1790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 3 13:40:37.151824 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 3 13:40:37.153651 systemd-networkd[1790]: eth0: Link UP Mar 3 13:40:37.153831 systemd-networkd[1790]: eth0: Gained carrier Mar 3 13:40:37.153862 systemd-networkd[1790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:40:37.157557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 3 13:40:37.166388 systemd-networkd[1790]: eth0: DHCPv4 address 172.31.29.215/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 3 13:40:37.196602 systemd-resolved[1791]: Positive Trust Anchors: Mar 3 13:40:37.196619 systemd-resolved[1791]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 13:40:37.196678 systemd-resolved[1791]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 13:40:37.199872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 3 13:40:37.203163 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 3 13:40:37.205645 systemd-resolved[1791]: Defaulting to hostname 'linux'. Mar 3 13:40:37.210033 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 3 13:40:37.210837 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 13:40:37.211788 systemd[1]: Reached target network.target - Network. Mar 3 13:40:37.212610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:40:37.230258 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 3 13:40:37.240682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:37.241453 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 13:40:37.241974 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 3 13:40:37.242503 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 3 13:40:37.242866 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 3 13:40:37.243424 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 3 13:40:37.243837 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 3 13:40:37.244151 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 3 13:40:37.244467 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 3 13:40:37.244500 systemd[1]: Reached target paths.target - Path Units. Mar 3 13:40:37.244946 systemd[1]: Reached target timers.target - Timer Units. Mar 3 13:40:37.246239 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 3 13:40:37.247846 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 3 13:40:37.250340 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 3 13:40:37.250835 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 3 13:40:37.251184 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 3 13:40:37.253774 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 3 13:40:37.254522 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 3 13:40:37.255546 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 3 13:40:37.256734 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 13:40:37.257224 systemd[1]: Reached target basic.target - Basic System. Mar 3 13:40:37.257596 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 3 13:40:37.257629 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Mar 3 13:40:37.258663 systemd[1]: Starting containerd.service - containerd container runtime... Mar 3 13:40:37.260216 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 3 13:40:37.263498 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 3 13:40:37.266568 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 3 13:40:37.270035 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 3 13:40:37.273989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 3 13:40:37.274679 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 3 13:40:37.278013 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 3 13:40:37.283491 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 3 13:40:37.289040 systemd[1]: Started ntpd.service - Network Time Service. Mar 3 13:40:37.289835 jq[1952]: false Mar 3 13:40:37.294473 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 3 13:40:37.300607 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 3 13:40:37.302492 oslogin_cache_refresh[1954]: Refreshing passwd entry cache Mar 3 13:40:37.303422 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Refreshing passwd entry cache Mar 3 13:40:37.305870 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 3 13:40:37.312589 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Failure getting users, quitting Mar 3 13:40:37.312589 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 3 13:40:37.312589 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Refreshing group entry cache Mar 3 13:40:37.310295 oslogin_cache_refresh[1954]: Failure getting users, quitting Mar 3 13:40:37.310335 oslogin_cache_refresh[1954]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 3 13:40:37.310383 oslogin_cache_refresh[1954]: Refreshing group entry cache Mar 3 13:40:37.318467 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 3 13:40:37.325380 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Failure getting groups, quitting Mar 3 13:40:37.325380 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 3 13:40:37.322821 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 3 13:40:37.321911 oslogin_cache_refresh[1954]: Failure getting groups, quitting Mar 3 13:40:37.321923 oslogin_cache_refresh[1954]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 3 13:40:37.332269 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 3 13:40:37.333052 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 3 13:40:37.336534 systemd[1]: Starting update-engine.service - Update Engine... Mar 3 13:40:37.340477 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 3 13:40:37.342145 extend-filesystems[1953]: Found /dev/nvme0n1p6 Mar 3 13:40:37.359874 extend-filesystems[1953]: Found /dev/nvme0n1p9 Mar 3 13:40:37.351461 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 3 13:40:37.352184 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 3 13:40:37.352446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 3 13:40:37.352703 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 3 13:40:37.354373 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 3 13:40:37.373271 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 3 13:40:37.374128 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 3 13:40:37.388267 extend-filesystems[1953]: Checking size of /dev/nvme0n1p9 Mar 3 13:40:37.400095 tar[1982]: linux-amd64/LICENSE Mar 3 13:40:37.400095 tar[1982]: linux-amd64/helm Mar 3 13:40:37.410289 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 3 13:40:37.417632 systemd[1]: motdgen.service: Deactivated successfully. Mar 3 13:40:37.417850 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 3 13:40:37.424334 jq[1968]: true Mar 3 13:40:37.425531 extend-filesystems[1953]: Resized partition /dev/nvme0n1p9 Mar 3 13:40:37.429942 (ntainerd)[1977]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 3 13:40:37.433148 update_engine[1967]: I20260303 13:40:37.432786 1967 main.cc:92] Flatcar Update Engine starting Mar 3 13:40:37.439399 ntpd[1956]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting Mar 3 13:40:37.440570 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting Mar 3 13:40:37.440570 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 13:40:37.440570 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: ---------------------------------------------------- Mar 3 13:40:37.440570 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: ntp-4 is maintained by Network Time Foundation, Mar 3 13:40:37.440570 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Mar 3 13:40:37.440570 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: corporation. Support and training for ntp-4 are Mar 3 13:40:37.440570 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: available at https://www.nwtime.org/support Mar 3 13:40:37.440570 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: ---------------------------------------------------- Mar 3 13:40:37.439460 ntpd[1956]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 13:40:37.439468 ntpd[1956]: ---------------------------------------------------- Mar 3 13:40:37.439475 ntpd[1956]: ntp-4 is maintained by Network Time Foundation, Mar 3 13:40:37.439481 ntpd[1956]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 13:40:37.439487 ntpd[1956]: corporation. Support and training for ntp-4 are Mar 3 13:40:37.439494 ntpd[1956]: available at https://www.nwtime.org/support Mar 3 13:40:37.439501 ntpd[1956]: ---------------------------------------------------- Mar 3 13:40:37.444996 ntpd[1956]: proto: precision = 0.056 usec (-24) Mar 3 13:40:37.445591 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: proto: precision = 0.056 usec (-24) Mar 3 13:40:37.457668 kernel: ntpd[1956]: segfault at 24 ip 00005567eb159aeb sp 00007ffd67051170 error 4 in ntpd[68aeb,5567eb0f7000+80000] likely on CPU 0 (core 0, socket 0) Mar 3 13:40:37.457755 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Mar 3 13:40:37.447930 ntpd[1956]: basedate set to 2026-02-19 Mar 3 13:40:37.457827 extend-filesystems[2005]: resize2fs 1.47.3 (8-Jul-2025) Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: basedate set to 2026-02-19 Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: gps base set to 2026-02-22 (week 2407) Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 
13:40:37 ntpd[1956]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: Listen normally on 3 eth0 172.31.29.215:123 Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: Listen normally on 4 lo [::1]:123 Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: bind(21) AF_INET6 [fe80::424:b2ff:fe7b:744b%2]:123 flags 0x811 failed: Cannot assign requested address Mar 3 13:40:37.458488 ntpd[1956]: 3 Mar 13:40:37 ntpd[1956]: unable to create socket on eth0 (5) for [fe80::424:b2ff:fe7b:744b%2]:123 Mar 3 13:40:37.447950 ntpd[1956]: gps base set to 2026-02-22 (week 2407) Mar 3 13:40:37.448056 ntpd[1956]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 13:40:37.450652 ntpd[1956]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 13:40:37.450836 ntpd[1956]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 13:40:37.450857 ntpd[1956]: Listen normally on 3 eth0 172.31.29.215:123 Mar 3 13:40:37.450882 ntpd[1956]: Listen normally on 4 lo [::1]:123 Mar 3 13:40:37.450905 ntpd[1956]: bind(21) AF_INET6 [fe80::424:b2ff:fe7b:744b%2]:123 flags 0x811 failed: Cannot assign requested address Mar 3 13:40:37.450920 ntpd[1956]: unable to create socket on eth0 (5) for [fe80::424:b2ff:fe7b:744b%2]:123 Mar 3 13:40:37.476162 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 3 13:40:37.479204 dbus-daemon[1950]: [system] SELinux support is enabled Mar 3 13:40:37.483486 systemd-coredump[2011]: Process 1956 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... 
Mar 3 13:40:37.485570 systemd-logind[1964]: Watching system buttons on /dev/input/event2 (Power Button) Mar 3 13:40:37.485590 systemd-logind[1964]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 3 13:40:37.485607 systemd-logind[1964]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 3 13:40:37.486559 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Mar 3 13:40:37.494604 systemd-logind[1964]: New seat seat0. Mar 3 13:40:37.498099 dbus-daemon[1950]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1790 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 3 13:40:37.501953 systemd[1]: Started systemd-coredump@0-2011-0.service - Process Core Dump (PID 2011/UID 0). Mar 3 13:40:37.503657 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 3 13:40:37.505663 coreos-metadata[1949]: Mar 03 13:40:37.505 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 3 13:40:37.505663 coreos-metadata[1949]: Mar 03 13:40:37.505 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 3 13:40:37.511484 coreos-metadata[1949]: Mar 03 13:40:37.507 INFO Fetch successful Mar 3 13:40:37.511484 coreos-metadata[1949]: Mar 03 13:40:37.507 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 3 13:40:37.508176 systemd[1]: Started systemd-logind.service - User Login Management. Mar 3 13:40:37.509295 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 3 13:40:37.509433 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 3 13:40:37.510408 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 3 13:40:37.510423 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 3 13:40:37.516867 jq[2004]: true Mar 3 13:40:37.517527 update_engine[1967]: I20260303 13:40:37.516000 1967 update_check_scheduler.cc:74] Next update check in 11m26s Mar 3 13:40:37.517575 coreos-metadata[1949]: Mar 03 13:40:37.515 INFO Fetch successful Mar 3 13:40:37.517575 coreos-metadata[1949]: Mar 03 13:40:37.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 3 13:40:37.517575 coreos-metadata[1949]: Mar 03 13:40:37.516 INFO Fetch successful Mar 3 13:40:37.517575 coreos-metadata[1949]: Mar 03 13:40:37.516 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 3 13:40:37.518945 coreos-metadata[1949]: Mar 03 13:40:37.518 INFO Fetch successful Mar 3 13:40:37.518945 coreos-metadata[1949]: Mar 03 13:40:37.518 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 3 13:40:37.521152 coreos-metadata[1949]: Mar 03 13:40:37.519 INFO Fetch failed with 404: resource not found Mar 3 13:40:37.521152 coreos-metadata[1949]: Mar 03 13:40:37.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 3 13:40:37.523423 systemd[1]: Started update-engine.service - Update Engine. 
Mar 3 13:40:37.527821 coreos-metadata[1949]: Mar 03 13:40:37.527 INFO Fetch successful Mar 3 13:40:37.527821 coreos-metadata[1949]: Mar 03 13:40:37.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 3 13:40:37.525129 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 3 13:40:37.535907 coreos-metadata[1949]: Mar 03 13:40:37.535 INFO Fetch successful Mar 3 13:40:37.535907 coreos-metadata[1949]: Mar 03 13:40:37.535 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 3 13:40:37.536354 coreos-metadata[1949]: Mar 03 13:40:37.536 INFO Fetch successful Mar 3 13:40:37.536354 coreos-metadata[1949]: Mar 03 13:40:37.536 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 3 13:40:37.538994 coreos-metadata[1949]: Mar 03 13:40:37.538 INFO Fetch successful Mar 3 13:40:37.538994 coreos-metadata[1949]: Mar 03 13:40:37.538 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 3 13:40:37.541911 coreos-metadata[1949]: Mar 03 13:40:37.540 INFO Fetch successful Mar 3 13:40:37.550789 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 3 13:40:37.564027 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 3 13:40:37.629851 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 3 13:40:37.645043 extend-filesystems[2005]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 3 13:40:37.645043 extend-filesystems[2005]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 3 13:40:37.645043 extend-filesystems[2005]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 3 13:40:37.643470 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Mar 3 13:40:37.662018 extend-filesystems[1953]: Resized filesystem in /dev/nvme0n1p9 Mar 3 13:40:37.643687 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 3 13:40:37.675655 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 3 13:40:37.676435 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 3 13:40:37.682964 bash[2047]: Updated "/home/core/.ssh/authorized_keys" Mar 3 13:40:37.683787 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 3 13:40:37.693526 systemd[1]: Starting sshkeys.service... Mar 3 13:40:37.715637 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 3 13:40:37.718452 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 3 13:40:37.910804 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 3 13:40:37.913183 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 3 13:40:37.913523 systemd-coredump[2016]: Process 1956 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1956: #0 0x00005567eb159aeb n/a (ntpd + 0x68aeb) #1 0x00005567eb102cdf n/a (ntpd + 0x11cdf) #2 0x00005567eb103575 n/a (ntpd + 0x12575) #3 0x00005567eb0fed8a n/a (ntpd + 0xdd8a) #4 0x00005567eb1005d3 n/a (ntpd + 0xf5d3) #5 0x00005567eb108fd1 n/a (ntpd + 0x17fd1) #6 0x00005567eb0f9c2d n/a (ntpd + 0x8c2d) #7 0x00007f3cd61c016c n/a (libc.so.6 + 0x2716c) #8 0x00007f3cd61c0229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005567eb0f9c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Mar 3 13:40:37.918050 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 3 13:40:37.918244 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 3 13:40:37.927782 dbus-daemon[1950]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2022 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 3 13:40:37.926916 systemd[1]: systemd-coredump@0-2011-0.service: Deactivated successfully. Mar 3 13:40:37.948530 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 3 13:40:37.956647 locksmithd[2019]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 13:40:37.961338 coreos-metadata[2068]: Mar 03 13:40:37.960 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 3 13:40:37.969341 coreos-metadata[2068]: Mar 03 13:40:37.966 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 3 13:40:37.971556 coreos-metadata[2068]: Mar 03 13:40:37.971 INFO Fetch successful
Mar 3 13:40:37.971556 coreos-metadata[2068]: Mar 03 13:40:37.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 3 13:40:37.972775 coreos-metadata[2068]: Mar 03 13:40:37.972 INFO Fetch successful
Mar 3 13:40:37.979447 unknown[2068]: wrote ssh authorized keys file for user: core
Mar 3 13:40:38.031220 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:40:38.037090 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 13:40:38.054162 update-ssh-keys[2131]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 13:40:38.057825 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 3 13:40:38.062957 systemd[1]: Finished sshkeys.service.
Mar 3 13:40:38.194428 containerd[1977]: time="2026-03-03T13:40:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 3 13:40:38.199453 containerd[1977]: time="2026-03-03T13:40:38.197295278Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 13:40:38.211251 ntpd[2136]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: ----------------------------------------------------
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: corporation. Support and training for ntp-4 are
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: available at https://www.nwtime.org/support
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: ----------------------------------------------------
Mar 3 13:40:38.214216 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: proto: precision = 0.081 usec (-23)
Mar 3 13:40:38.211343 ntpd[2136]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:40:38.211356 ntpd[2136]: ----------------------------------------------------
Mar 3 13:40:38.211366 ntpd[2136]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:40:38.211375 ntpd[2136]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:40:38.211386 ntpd[2136]: corporation. Support and training for ntp-4 are
Mar 3 13:40:38.211396 ntpd[2136]: available at https://www.nwtime.org/support
Mar 3 13:40:38.211407 ntpd[2136]: ----------------------------------------------------
Mar 3 13:40:38.212193 ntpd[2136]: proto: precision = 0.081 usec (-23)
Mar 3 13:40:38.229342 kernel: ntpd[2136]: segfault at 24 ip 00005601cc161aeb sp 00007ffdec34d0c0 error 4 in ntpd[68aeb,5601cc0ff000+80000] likely on CPU 0 (core 0, socket 0)
Mar 3 13:40:38.229458 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: basedate set to 2026-02-19
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: Listen normally on 3 eth0 172.31.29.215:123
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: Listen normally on 4 lo [::1]:123
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: bind(21) AF_INET6 [fe80::424:b2ff:fe7b:744b%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 13:40:38.229490 ntpd[2136]: 3 Mar 13:40:38 ntpd[2136]: unable to create socket on eth0 (5) for [fe80::424:b2ff:fe7b:744b%2]:123
Mar 3 13:40:38.221243 ntpd[2136]: basedate set to 2026-02-19
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.224865704Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.905µs"
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.224914522Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.224946124Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225144937Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225172271Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225211873Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225283200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225362691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225705901Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225730509Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225755192Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:40:38.229954 containerd[1977]: time="2026-03-03T13:40:38.225773160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 3 13:40:38.221268 ntpd[2136]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.225886314Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.226139341Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.226186925Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.226201425Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.226237445Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.226665185Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.226730142Z" level=info msg="metadata content store policy set" policy=shared
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.235720278Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.235813574Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.235835097Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.235929839Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.235952410Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.235970716Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 3 13:40:38.237735 containerd[1977]: time="2026-03-03T13:40:38.235992500Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 3 13:40:38.221391 ntpd[2136]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.236009347Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.236026252Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.236041560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.236055450Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.236073596Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.236232782Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.236259933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.236282682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.237034119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.237079777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.237110259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.237134497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.237157547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.237185281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 3 13:40:38.238263 containerd[1977]: time="2026-03-03T13:40:38.237203717Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 3 13:40:38.221419 ntpd[2136]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:40:38.238851 containerd[1977]: time="2026-03-03T13:40:38.237222499Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 13:40:38.238851 containerd[1977]: time="2026-03-03T13:40:38.237288256Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 3 13:40:38.238851 containerd[1977]: time="2026-03-03T13:40:38.237362109Z" level=info msg="Start snapshots syncer"
Mar 3 13:40:38.238851 containerd[1977]: time="2026-03-03T13:40:38.237428340Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 3 13:40:38.221603 ntpd[2136]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.237903835Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.237973867Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238048274Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238451294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238486496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238504701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238522118Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238541881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238558132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238574082Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238608669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238626107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 13:40:38.239040 containerd[1977]: time="2026-03-03T13:40:38.238644202Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 13:40:38.221629 ntpd[2136]: Listen normally on 3 eth0 172.31.29.215:123
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239688792Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239784373Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239802354Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239817903Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239832735Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239851361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239878463Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239901701Z" level=info msg="runtime interface created"
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239909406Z" level=info msg="created NRI interface"
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239922163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239939443Z" level=info msg="Connect containerd service"
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.239974235Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 3 13:40:38.248429 containerd[1977]: time="2026-03-03T13:40:38.241131187Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 13:40:38.221657 ntpd[2136]: Listen normally on 4 lo [::1]:123
Mar 3 13:40:38.221685 ntpd[2136]: bind(21) AF_INET6 [fe80::424:b2ff:fe7b:744b%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 13:40:38.221705 ntpd[2136]: unable to create socket on eth0 (5) for [fe80::424:b2ff:fe7b:744b%2]:123
Mar 3 13:40:38.258821 systemd-coredump[2167]: Process 2136 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 3 13:40:38.272806 systemd[1]: Started systemd-coredump@1-2167-0.service - Process Core Dump (PID 2167/UID 0).
Mar 3 13:40:38.354547 sshd_keygen[2007]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 3 13:40:38.392539 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 3 13:40:38.397810 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 3 13:40:38.455180 systemd[1]: issuegen.service: Deactivated successfully.
Mar 3 13:40:38.456222 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 3 13:40:38.461713 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 3 13:40:38.516340 polkitd[2123]: Started polkitd version 126
Mar 3 13:40:38.521150 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 3 13:40:38.526982 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 3 13:40:38.531167 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 3 13:40:38.534443 systemd[1]: Reached target getty.target - Login Prompts.
Mar 3 13:40:38.541215 polkitd[2123]: Loading rules from directory /etc/polkit-1/rules.d
Mar 3 13:40:38.544554 polkitd[2123]: Loading rules from directory /run/polkit-1/rules.d
Mar 3 13:40:38.546334 polkitd[2123]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 3 13:40:38.546812 polkitd[2123]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Mar 3 13:40:38.546848 polkitd[2123]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 3 13:40:38.546895 polkitd[2123]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 3 13:40:38.550428 polkitd[2123]: Finished loading, compiling and executing 2 rules
Mar 3 13:40:38.550720 systemd[1]: Started polkit.service - Authorization Manager.
Mar 3 13:40:38.553712 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 3 13:40:38.555387 polkitd[2123]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 3 13:40:38.574891 systemd-hostnamed[2022]: Hostname set to (transient)
Mar 3 13:40:38.575720 systemd-resolved[1791]: System hostname changed to 'ip-172-31-29-215'.
Mar 3 13:40:38.589610 systemd-coredump[2168]: Process 2136 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2136: #0 0x00005601cc161aeb n/a (ntpd + 0x68aeb) #1 0x00005601cc10acdf n/a (ntpd + 0x11cdf) #2 0x00005601cc10b575 n/a (ntpd + 0x12575) #3 0x00005601cc106d8a n/a (ntpd + 0xdd8a) #4 0x00005601cc1085d3 n/a (ntpd + 0xf5d3) #5 0x00005601cc110fd1 n/a (ntpd + 0x17fd1) #6 0x00005601cc101c2d n/a (ntpd + 0x8c2d) #7 0x00007f341933816c n/a (libc.so.6 + 0x2716c) #8 0x00007f3419338229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005601cc101c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Mar 3 13:40:38.592135 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 3 13:40:38.592354 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 3 13:40:38.598676 systemd[1]: systemd-coredump@1-2167-0.service: Deactivated successfully.
Mar 3 13:40:38.625863 containerd[1977]: time="2026-03-03T13:40:38.625805714Z" level=info msg="Start subscribing containerd event"
Mar 3 13:40:38.626275 containerd[1977]: time="2026-03-03T13:40:38.626076811Z" level=info msg="Start recovering state"
Mar 3 13:40:38.626571 containerd[1977]: time="2026-03-03T13:40:38.626533501Z" level=info msg="Start event monitor"
Mar 3 13:40:38.626803 containerd[1977]: time="2026-03-03T13:40:38.626557322Z" level=info msg="Start cni network conf syncer for default"
Mar 3 13:40:38.626803 containerd[1977]: time="2026-03-03T13:40:38.626742996Z" level=info msg="Start streaming server"
Mar 3 13:40:38.626803 containerd[1977]: time="2026-03-03T13:40:38.626759285Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 3 13:40:38.626803 containerd[1977]: time="2026-03-03T13:40:38.626770869Z" level=info msg="runtime interface starting up..."
Mar 3 13:40:38.626803 containerd[1977]: time="2026-03-03T13:40:38.626781973Z" level=info msg="starting plugins..."
Mar 3 13:40:38.627321 containerd[1977]: time="2026-03-03T13:40:38.627271466Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 3 13:40:38.627548 containerd[1977]: time="2026-03-03T13:40:38.627508756Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 3 13:40:38.627883 containerd[1977]: time="2026-03-03T13:40:38.627860059Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 3 13:40:38.628276 systemd[1]: Started containerd.service - containerd container runtime.
Mar 3 13:40:38.629236 containerd[1977]: time="2026-03-03T13:40:38.629084686Z" level=info msg="containerd successfully booted in 0.441011s"
Mar 3 13:40:38.716504 tar[1982]: linux-amd64/README.md
Mar 3 13:40:38.719569 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2.
Mar 3 13:40:38.724233 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 13:40:38.733731 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 3 13:40:38.743845 ntpd[2214]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:40:38.743914 ntpd[2214]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:40:38.744333 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:40:38.744333 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:40:38.744333 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: ----------------------------------------------------
Mar 3 13:40:38.744333 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:40:38.744333 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:40:38.744333 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: corporation. Support and training for ntp-4 are
Mar 3 13:40:38.744333 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: available at https://www.nwtime.org/support
Mar 3 13:40:38.744333 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: ----------------------------------------------------
Mar 3 13:40:38.743925 ntpd[2214]: ----------------------------------------------------
Mar 3 13:40:38.744955 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: proto: precision = 0.095 usec (-23)
Mar 3 13:40:38.743935 ntpd[2214]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:40:38.743944 ntpd[2214]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:40:38.745086 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: basedate set to 2026-02-19
Mar 3 13:40:38.745086 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:40:38.743953 ntpd[2214]: corporation. Support and training for ntp-4 are
Mar 3 13:40:38.745202 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:40:38.745202 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:40:38.743962 ntpd[2214]: available at https://www.nwtime.org/support
Mar 3 13:40:38.743970 ntpd[2214]: ----------------------------------------------------
Mar 3 13:40:38.744707 ntpd[2214]: proto: precision = 0.095 usec (-23)
Mar 3 13:40:38.744989 ntpd[2214]: basedate set to 2026-02-19
Mar 3 13:40:38.745003 ntpd[2214]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:40:38.745535 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:40:38.745535 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: Listen normally on 3 eth0 172.31.29.215:123
Mar 3 13:40:38.745535 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: Listen normally on 4 lo [::1]:123
Mar 3 13:40:38.745092 ntpd[2214]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:40:38.745687 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: bind(21) AF_INET6 [fe80::424:b2ff:fe7b:744b%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 13:40:38.745687 ntpd[2214]: 3 Mar 13:40:38 ntpd[2214]: unable to create socket on eth0 (5) for [fe80::424:b2ff:fe7b:744b%2]:123
Mar 3 13:40:38.745120 ntpd[2214]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:40:38.745437 ntpd[2214]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:40:38.745470 ntpd[2214]: Listen normally on 3 eth0 172.31.29.215:123
Mar 3 13:40:38.745502 ntpd[2214]: Listen normally on 4 lo [::1]:123
Mar 3 13:40:38.745534 ntpd[2214]: bind(21) AF_INET6 [fe80::424:b2ff:fe7b:744b%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 13:40:38.745556 ntpd[2214]: unable to create socket on eth0 (5) for [fe80::424:b2ff:fe7b:744b%2]:123
Mar 3 13:40:38.746472 kernel: ntpd[2214]: segfault at 24 ip 0000559442264aeb sp 00007ffe0e160df0 error 4 in ntpd[68aeb,559442202000+80000] likely on CPU 0 (core 0, socket 0)
Mar 3 13:40:38.749176 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Mar 3 13:40:38.755543 systemd-coredump[2217]: Process 2214 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 3 13:40:38.760573 systemd[1]: Started systemd-coredump@2-2217-0.service - Process Core Dump (PID 2217/UID 0).
Mar 3 13:40:38.843402 systemd-coredump[2218]: Process 2214 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2214: #0 0x0000559442264aeb n/a (ntpd + 0x68aeb) #1 0x000055944220dcdf n/a (ntpd + 0x11cdf) #2 0x000055944220e575 n/a (ntpd + 0x12575) #3 0x0000559442209d8a n/a (ntpd + 0xdd8a) #4 0x000055944220b5d3 n/a (ntpd + 0xf5d3) #5 0x0000559442213fd1 n/a (ntpd + 0x17fd1) #6 0x0000559442204c2d n/a (ntpd + 0x8c2d) #7 0x00007f4d2dff716c n/a (libc.so.6 + 0x2716c) #8 0x00007f4d2dff7229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000559442204c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Mar 3 13:40:38.846220 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 3 13:40:38.846534 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 3 13:40:38.849527 systemd[1]: systemd-coredump@2-2217-0.service: Deactivated successfully.
Mar 3 13:40:38.970523 systemd-networkd[1790]: eth0: Gained IPv6LL
Mar 3 13:40:38.973358 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 3 13:40:38.974957 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 3.
Mar 3 13:40:38.975336 systemd[1]: Reached target network-online.target - Network is Online.
Mar 3 13:40:38.977456 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 3 13:40:38.982556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:40:38.988604 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 13:40:38.994188 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 3 13:40:39.035360 ntpd[2231]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting Mar 3 13:40:39.035432 ntpd[2231]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 13:40:39.035443 ntpd[2231]: ---------------------------------------------------- Mar 3 13:40:39.035452 ntpd[2231]: ntp-4 is maintained by Network Time Foundation, Mar 3 13:40:39.035461 ntpd[2231]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 13:40:39.035470 ntpd[2231]: corporation. Support and training for ntp-4 are Mar 3 13:40:39.036614 ntpd[2231]: available at https://www.nwtime.org/support Mar 3 13:40:39.036626 ntpd[2231]: ---------------------------------------------------- Mar 3 13:40:39.038562 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 3 13:40:39.038658 ntpd[2231]: proto: precision = 0.075 usec (-24) Mar 3 13:40:39.038909 ntpd[2231]: basedate set to 2026-02-19 Mar 3 13:40:39.038923 ntpd[2231]: gps base set to 2026-02-22 (week 2407) Mar 3 13:40:39.039014 ntpd[2231]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 13:40:39.039042 ntpd[2231]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 13:40:39.039227 ntpd[2231]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 13:40:39.039255 ntpd[2231]: Listen normally on 3 eth0 172.31.29.215:123 Mar 3 13:40:39.039282 ntpd[2231]: Listen normally on 4 lo [::1]:123 Mar 3 13:40:39.039334 ntpd[2231]: Listen normally on 5 eth0 [fe80::424:b2ff:fe7b:744b%2]:123 Mar 3 13:40:39.039361 ntpd[2231]: Listening on routing socket on fd #22 for interface updates Mar 3 13:40:39.043534 ntpd[2231]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized 
Mar 3 13:40:39.043687 ntpd[2231]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 3 13:40:39.110632 amazon-ssm-agent[2229]: Initializing new seelog logger Mar 3 13:40:39.111186 amazon-ssm-agent[2229]: New Seelog Logger Creation Complete Mar 3 13:40:39.111335 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.111831 amazon-ssm-agent[2229]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.111831 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 processing appconfig overrides Mar 3 13:40:39.112128 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.112173 amazon-ssm-agent[2229]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.112271 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 processing appconfig overrides Mar 3 13:40:39.112541 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.112575 amazon-ssm-agent[2229]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.112692 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 processing appconfig overrides Mar 3 13:40:39.113143 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1120 INFO Proxy environment variables: Mar 3 13:40:39.116828 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.116828 amazon-ssm-agent[2229]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.116828 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 processing appconfig overrides Mar 3 13:40:39.213150 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1120 INFO https_proxy: Mar 3 13:40:39.235670 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 3 13:40:39.240699 systemd[1]: Started sshd@0-172.31.29.215:22-68.220.241.50:55090.service - OpenSSH per-connection server daemon (68.220.241.50:55090). Mar 3 13:40:39.314399 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1120 INFO http_proxy: Mar 3 13:40:39.412263 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.412430 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1121 INFO no_proxy: Mar 3 13:40:39.412503 amazon-ssm-agent[2229]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 13:40:39.412641 amazon-ssm-agent[2229]: 2026/03/03 13:40:39 processing appconfig overrides Mar 3 13:40:39.437609 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1123 INFO Checking if agent identity type OnPrem can be assumed Mar 3 13:40:39.437731 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1124 INFO Checking if agent identity type EC2 can be assumed Mar 3 13:40:39.437790 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1598 INFO Agent will take identity from EC2 Mar 3 13:40:39.437840 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1614 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 3 13:40:39.437878 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1614 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 3 13:40:39.437912 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1614 INFO [amazon-ssm-agent] Starting Core Agent Mar 3 13:40:39.437947 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1614 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Mar 3 13:40:39.437981 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1614 INFO [Registrar] Starting registrar module Mar 3 13:40:39.438040 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1629 INFO [EC2Identity] Checking disk for registration info Mar 3 13:40:39.438040 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1630 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 3 13:40:39.438040 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.1630 INFO [EC2Identity] Generating registration keypair Mar 3 13:40:39.438040 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.3736 INFO [EC2Identity] Checking write access before registering Mar 3 13:40:39.438191 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.3741 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 3 13:40:39.438191 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.4120 INFO [EC2Identity] EC2 registration was successful. Mar 3 13:40:39.438191 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.4121 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Mar 3 13:40:39.438191 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.4121 INFO [CredentialRefresher] credentialRefresher has started Mar 3 13:40:39.438191 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.4122 INFO [CredentialRefresher] Starting credentials refresher loop Mar 3 13:40:39.438191 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.4373 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 3 13:40:39.438191 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.4375 INFO [CredentialRefresher] Credentials ready Mar 3 13:40:39.510278 amazon-ssm-agent[2229]: 2026-03-03 13:40:39.4381 INFO [CredentialRefresher] Next credential rotation will be in 29.9999880306 minutes Mar 3 13:40:39.745922 sshd[2251]: Accepted publickey for core from 68.220.241.50 port 55090 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:39.749423 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:39.758727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 3 13:40:39.760106 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 3 13:40:39.777640 systemd-logind[1964]: New session 1 of user core. Mar 3 13:40:39.784207 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 3 13:40:39.787706 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 3 13:40:39.802477 (systemd)[2257]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 3 13:40:39.805779 systemd-logind[1964]: New session c1 of user core. Mar 3 13:40:40.010648 systemd[2257]: Queued start job for default target default.target. Mar 3 13:40:40.016056 systemd[2257]: Created slice app.slice - User Application Slice. Mar 3 13:40:40.016100 systemd[2257]: Reached target paths.target - Paths. Mar 3 13:40:40.016673 systemd[2257]: Reached target timers.target - Timers. 
Mar 3 13:40:40.018479 systemd[2257]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 3 13:40:40.042280 systemd[2257]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 3 13:40:40.042430 systemd[2257]: Reached target sockets.target - Sockets. Mar 3 13:40:40.042478 systemd[2257]: Reached target basic.target - Basic System. Mar 3 13:40:40.042514 systemd[2257]: Reached target default.target - Main User Target. Mar 3 13:40:40.042544 systemd[2257]: Startup finished in 229ms. Mar 3 13:40:40.042659 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 3 13:40:40.052513 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 3 13:40:40.308568 systemd[1]: Started sshd@1-172.31.29.215:22-68.220.241.50:55094.service - OpenSSH per-connection server daemon (68.220.241.50:55094). Mar 3 13:40:40.339537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:40:40.340718 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 3 13:40:40.342531 systemd[1]: Startup finished in 2.631s (kernel) + 8.878s (initrd) + 6.883s (userspace) = 18.394s. 
Mar 3 13:40:40.348861 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 13:40:40.454228 amazon-ssm-agent[2229]: 2026-03-03 13:40:40.4509 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 3 13:40:40.554330 amazon-ssm-agent[2229]: 2026-03-03 13:40:40.4526 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2284) started Mar 3 13:40:40.654864 amazon-ssm-agent[2229]: 2026-03-03 13:40:40.4536 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 3 13:40:40.747401 sshd[2270]: Accepted publickey for core from 68.220.241.50 port 55094 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:40.749043 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:40.754126 systemd-logind[1964]: New session 2 of user core. Mar 3 13:40:40.758512 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 3 13:40:40.986277 sshd[2300]: Connection closed by 68.220.241.50 port 55094 Mar 3 13:40:40.986767 sshd-session[2270]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:40.991582 systemd[1]: sshd@1-172.31.29.215:22-68.220.241.50:55094.service: Deactivated successfully. Mar 3 13:40:40.993383 systemd[1]: session-2.scope: Deactivated successfully. Mar 3 13:40:40.993509 systemd-logind[1964]: Session 2 logged out. Waiting for processes to exit. Mar 3 13:40:40.996677 systemd-logind[1964]: Removed session 2. 
Mar 3 13:40:41.043582 kubelet[2276]: E0303 13:40:41.043500 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 13:40:41.046405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 13:40:41.046601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 13:40:41.046991 systemd[1]: kubelet.service: Consumed 978ms CPU time, 258.2M memory peak. Mar 3 13:40:41.090100 systemd[1]: Started sshd@2-172.31.29.215:22-68.220.241.50:55106.service - OpenSSH per-connection server daemon (68.220.241.50:55106). Mar 3 13:40:41.554544 sshd[2308]: Accepted publickey for core from 68.220.241.50 port 55106 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:41.555464 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:41.561373 systemd-logind[1964]: New session 3 of user core. Mar 3 13:40:41.570584 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 3 13:40:41.800551 sshd[2311]: Connection closed by 68.220.241.50 port 55106 Mar 3 13:40:41.801349 sshd-session[2308]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:41.805764 systemd[1]: sshd@2-172.31.29.215:22-68.220.241.50:55106.service: Deactivated successfully. Mar 3 13:40:41.807506 systemd[1]: session-3.scope: Deactivated successfully. Mar 3 13:40:41.808220 systemd-logind[1964]: Session 3 logged out. Waiting for processes to exit. Mar 3 13:40:41.809725 systemd-logind[1964]: Removed session 3. Mar 3 13:40:41.882211 systemd[1]: Started sshd@3-172.31.29.215:22-68.220.241.50:45400.service - OpenSSH per-connection server daemon (68.220.241.50:45400). 
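Both kubelet start attempts in this log fail identically: `/var/lib/kubelet/config.yaml` does not exist yet, which is expected on a node that has not yet run `kubeadm init` or `kubeadm join` (those commands write the file). Purely as a hedged illustration, the file the unit is looking for has roughly this shape; the values below are generic assumptions, not this node's real configuration:

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch of the missing file.
# On a kubeadm-managed node this is generated during `kubeadm join`;
# it is not normally hand-written.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # assumption: matches the systemd-managed host above
```

On a kubeadm-managed node the crash-loop resolves on its own once `kubeadm join` runs and the file appears; the scheduled restarts seen later in this log are what eventually pick it up.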
Mar 3 13:40:42.314360 sshd[2317]: Accepted publickey for core from 68.220.241.50 port 45400 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:42.315689 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:42.325952 systemd-logind[1964]: New session 4 of user core. Mar 3 13:40:42.332584 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 3 13:40:42.549420 sshd[2320]: Connection closed by 68.220.241.50 port 45400 Mar 3 13:40:42.549994 sshd-session[2317]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:42.554438 systemd-logind[1964]: Session 4 logged out. Waiting for processes to exit. Mar 3 13:40:42.555464 systemd[1]: sshd@3-172.31.29.215:22-68.220.241.50:45400.service: Deactivated successfully. Mar 3 13:40:42.557550 systemd[1]: session-4.scope: Deactivated successfully. Mar 3 13:40:42.559410 systemd-logind[1964]: Removed session 4. Mar 3 13:40:42.641518 systemd[1]: Started sshd@4-172.31.29.215:22-68.220.241.50:45414.service - OpenSSH per-connection server daemon (68.220.241.50:45414). Mar 3 13:40:43.067539 sshd[2326]: Accepted publickey for core from 68.220.241.50 port 45414 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:43.068950 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:43.075173 systemd-logind[1964]: New session 5 of user core. Mar 3 13:40:43.084575 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 3 13:40:43.285454 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 3 13:40:43.285728 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:40:43.297544 sudo[2330]: pam_unix(sudo:session): session closed for user root Mar 3 13:40:43.375147 sshd[2329]: Connection closed by 68.220.241.50 port 45414 Mar 3 13:40:43.376593 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:43.380147 systemd[1]: sshd@4-172.31.29.215:22-68.220.241.50:45414.service: Deactivated successfully. Mar 3 13:40:43.382376 systemd[1]: session-5.scope: Deactivated successfully. Mar 3 13:40:43.383823 systemd-logind[1964]: Session 5 logged out. Waiting for processes to exit. Mar 3 13:40:43.385406 systemd-logind[1964]: Removed session 5. Mar 3 13:40:43.462194 systemd[1]: Started sshd@5-172.31.29.215:22-68.220.241.50:45426.service - OpenSSH per-connection server daemon (68.220.241.50:45426). Mar 3 13:40:43.897144 sshd[2336]: Accepted publickey for core from 68.220.241.50 port 45426 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:43.897711 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:43.902664 systemd-logind[1964]: New session 6 of user core. Mar 3 13:40:43.912567 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 3 13:40:44.054745 sudo[2341]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 3 13:40:44.055114 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:40:44.060433 sudo[2341]: pam_unix(sudo:session): session closed for user root Mar 3 13:40:44.066396 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 3 13:40:44.066775 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:40:44.077475 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 3 13:40:44.121515 augenrules[2363]: No rules Mar 3 13:40:44.122854 systemd[1]: audit-rules.service: Deactivated successfully. Mar 3 13:40:44.123131 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 3 13:40:44.124226 sudo[2340]: pam_unix(sudo:session): session closed for user root Mar 3 13:40:44.201458 sshd[2339]: Connection closed by 68.220.241.50 port 45426 Mar 3 13:40:44.201893 sshd-session[2336]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:44.205914 systemd[1]: sshd@5-172.31.29.215:22-68.220.241.50:45426.service: Deactivated successfully. Mar 3 13:40:44.207678 systemd[1]: session-6.scope: Deactivated successfully. Mar 3 13:40:44.208354 systemd-logind[1964]: Session 6 logged out. Waiting for processes to exit. Mar 3 13:40:44.209828 systemd-logind[1964]: Removed session 6. Mar 3 13:40:44.292389 systemd[1]: Started sshd@6-172.31.29.215:22-68.220.241.50:45442.service - OpenSSH per-connection server daemon (68.220.241.50:45442). 
Mar 3 13:40:44.720036 sshd[2372]: Accepted publickey for core from 68.220.241.50 port 45442 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:44.721686 sshd-session[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:44.728000 systemd-logind[1964]: New session 7 of user core. Mar 3 13:40:44.735547 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 3 13:40:44.877611 sudo[2376]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 3 13:40:44.877884 sudo[2376]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:40:45.541051 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 3 13:40:45.551777 (dockerd)[2395]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 3 13:40:46.585844 systemd-resolved[1791]: Clock change detected. Flushing caches. Mar 3 13:40:46.630549 dockerd[2395]: time="2026-03-03T13:40:46.629957385Z" level=info msg="Starting up" Mar 3 13:40:46.631590 dockerd[2395]: time="2026-03-03T13:40:46.631541848Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 3 13:40:46.644691 dockerd[2395]: time="2026-03-03T13:40:46.644643611Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 3 13:40:46.662716 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport40298663-merged.mount: Deactivated successfully. Mar 3 13:40:46.748315 systemd[1]: var-lib-docker-metacopy\x2dcheck127659019-merged.mount: Deactivated successfully. Mar 3 13:40:46.769326 dockerd[2395]: time="2026-03-03T13:40:46.769128583Z" level=info msg="Loading containers: start." 
Mar 3 13:40:46.780809 kernel: Initializing XFRM netlink socket Mar 3 13:40:47.102326 (udev-worker)[2415]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:40:47.148840 systemd-networkd[1790]: docker0: Link UP Mar 3 13:40:47.154400 dockerd[2395]: time="2026-03-03T13:40:47.154348139Z" level=info msg="Loading containers: done." Mar 3 13:40:47.212034 dockerd[2395]: time="2026-03-03T13:40:47.211986635Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 3 13:40:47.212223 dockerd[2395]: time="2026-03-03T13:40:47.212081636Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 3 13:40:47.212223 dockerd[2395]: time="2026-03-03T13:40:47.212165726Z" level=info msg="Initializing buildkit" Mar 3 13:40:47.255103 dockerd[2395]: time="2026-03-03T13:40:47.254908738Z" level=info msg="Completed buildkit initialization" Mar 3 13:40:47.264176 dockerd[2395]: time="2026-03-03T13:40:47.264120282Z" level=info msg="Daemon has completed initialization" Mar 3 13:40:47.264920 dockerd[2395]: time="2026-03-03T13:40:47.264306696Z" level=info msg="API listen on /run/docker.sock" Mar 3 13:40:47.264423 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 3 13:40:47.659030 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2974133293-merged.mount: Deactivated successfully. Mar 3 13:40:47.926811 containerd[1977]: time="2026-03-03T13:40:47.926648147Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 3 13:40:48.515453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775128718.mount: Deactivated successfully. 
Mar 3 13:40:50.430674 containerd[1977]: time="2026-03-03T13:40:50.430317852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:50.431861 containerd[1977]: time="2026-03-03T13:40:50.431592822Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 3 13:40:50.432904 containerd[1977]: time="2026-03-03T13:40:50.432866398Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:50.435703 containerd[1977]: time="2026-03-03T13:40:50.435665643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:50.436601 containerd[1977]: time="2026-03-03T13:40:50.436571647Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.509887932s" Mar 3 13:40:50.436685 containerd[1977]: time="2026-03-03T13:40:50.436607310Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 3 13:40:50.437257 containerd[1977]: time="2026-03-03T13:40:50.437091788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 3 13:40:51.845881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 3 13:40:51.848543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:40:52.179653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:40:52.197605 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 13:40:52.277150 kubelet[2676]: E0303 13:40:52.277102 2676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 13:40:52.283376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 13:40:52.283571 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 13:40:52.284014 systemd[1]: kubelet.service: Consumed 220ms CPU time, 110.6M memory peak. 
Mar 3 13:40:52.455367 containerd[1977]: time="2026-03-03T13:40:52.455256525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:52.462248 containerd[1977]: time="2026-03-03T13:40:52.462199104Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 3 13:40:52.468810 containerd[1977]: time="2026-03-03T13:40:52.467824012Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:52.519661 containerd[1977]: time="2026-03-03T13:40:52.519611379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:52.520603 containerd[1977]: time="2026-03-03T13:40:52.520574539Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 2.083292314s" Mar 3 13:40:52.520711 containerd[1977]: time="2026-03-03T13:40:52.520698528Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 3 13:40:52.521287 containerd[1977]: time="2026-03-03T13:40:52.521238120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 3 13:40:54.226783 containerd[1977]: time="2026-03-03T13:40:54.226719901Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:54.228158 containerd[1977]: time="2026-03-03T13:40:54.227981223Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 3 13:40:54.229823 containerd[1977]: time="2026-03-03T13:40:54.229768718Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:54.233807 containerd[1977]: time="2026-03-03T13:40:54.233634286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:54.235041 containerd[1977]: time="2026-03-03T13:40:54.234628344Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.713340211s" Mar 3 13:40:54.235041 containerd[1977]: time="2026-03-03T13:40:54.234671033Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 3 13:40:54.235308 containerd[1977]: time="2026-03-03T13:40:54.235274439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 3 13:40:55.343843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887690687.mount: Deactivated successfully. 
Mar 3 13:40:55.744292 containerd[1977]: time="2026-03-03T13:40:55.744004459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:55.745498 containerd[1977]: time="2026-03-03T13:40:55.745362406Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 3 13:40:55.746434 containerd[1977]: time="2026-03-03T13:40:55.746400905Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:55.748351 containerd[1977]: time="2026-03-03T13:40:55.748209729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:55.748728 containerd[1977]: time="2026-03-03T13:40:55.748704742Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.513400933s" Mar 3 13:40:55.748805 containerd[1977]: time="2026-03-03T13:40:55.748734435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 3 13:40:55.749165 containerd[1977]: time="2026-03-03T13:40:55.749144078Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 3 13:40:56.273665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994929952.mount: Deactivated successfully. 
Mar 3 13:40:57.855444 containerd[1977]: time="2026-03-03T13:40:57.855389134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:57.857431 containerd[1977]: time="2026-03-03T13:40:57.857299114Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 3 13:40:57.859433 containerd[1977]: time="2026-03-03T13:40:57.859376772Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:57.863158 containerd[1977]: time="2026-03-03T13:40:57.863088694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:57.863819 containerd[1977]: time="2026-03-03T13:40:57.863786400Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.114599727s" Mar 3 13:40:57.863819 containerd[1977]: time="2026-03-03T13:40:57.863823205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 3 13:40:57.864266 containerd[1977]: time="2026-03-03T13:40:57.864238242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 3 13:40:58.386462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624513572.mount: Deactivated successfully. 
Mar 3 13:40:58.399173 containerd[1977]: time="2026-03-03T13:40:58.399123203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:40:58.401362 containerd[1977]: time="2026-03-03T13:40:58.400951159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 3 13:40:58.403374 containerd[1977]: time="2026-03-03T13:40:58.403329482Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:40:58.406607 containerd[1977]: time="2026-03-03T13:40:58.406568421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:40:58.407375 containerd[1977]: time="2026-03-03T13:40:58.407347393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 543.084969ms"
Mar 3 13:40:58.407476 containerd[1977]: time="2026-03-03T13:40:58.407463403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 3 13:40:58.408089 containerd[1977]: time="2026-03-03T13:40:58.407923782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 3 13:40:58.972747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021504746.mount: Deactivated successfully.
Mar 3 13:40:59.944638 containerd[1977]: time="2026-03-03T13:40:59.944587627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:40:59.945684 containerd[1977]: time="2026-03-03T13:40:59.945596913Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 3 13:40:59.946527 containerd[1977]: time="2026-03-03T13:40:59.946498054Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:40:59.949232 containerd[1977]: time="2026-03-03T13:40:59.949181545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:40:59.950456 containerd[1977]: time="2026-03-03T13:40:59.950038692Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.542091147s"
Mar 3 13:40:59.950456 containerd[1977]: time="2026-03-03T13:40:59.950067408Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 3 13:41:02.540388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 3 13:41:02.544885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:41:03.011978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:41:03.022208 (kubelet)[2841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:41:03.083146 kubelet[2841]: E0303 13:41:03.083102 2841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:41:03.086517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:41:03.087409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:41:03.087981 systemd[1]: kubelet.service: Consumed 211ms CPU time, 109.7M memory peak.
Mar 3 13:41:04.196600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:41:04.196886 systemd[1]: kubelet.service: Consumed 211ms CPU time, 109.7M memory peak.
Mar 3 13:41:04.199732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:41:04.236883 systemd[1]: Reload requested from client PID 2855 ('systemctl') (unit session-7.scope)...
Mar 3 13:41:04.236904 systemd[1]: Reloading...
Mar 3 13:41:04.384807 zram_generator::config[2900]: No configuration found.
Mar 3 13:41:04.653214 systemd[1]: Reloading finished in 415 ms.
Mar 3 13:41:04.722401 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 3 13:41:04.722490 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 3 13:41:04.723006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:41:04.723198 systemd[1]: kubelet.service: Consumed 140ms CPU time, 98.5M memory peak.
Mar 3 13:41:04.724975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:41:04.968007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:41:04.980480 (kubelet)[2963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 3 13:41:05.050975 kubelet[2963]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 3 13:41:05.050975 kubelet[2963]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 13:41:05.052954 kubelet[2963]: I0303 13:41:05.052877 2963 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 3 13:41:05.549797 kubelet[2963]: I0303 13:41:05.548466 2963 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 3 13:41:05.549797 kubelet[2963]: I0303 13:41:05.548500 2963 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 3 13:41:05.549797 kubelet[2963]: I0303 13:41:05.548532 2963 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 3 13:41:05.549797 kubelet[2963]: I0303 13:41:05.548542 2963 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 3 13:41:05.549797 kubelet[2963]: I0303 13:41:05.549173 2963 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 3 13:41:05.563015 kubelet[2963]: I0303 13:41:05.562969 2963 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 3 13:41:05.567745 kubelet[2963]: E0303 13:41:05.567296 2963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.215:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.215:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 3 13:41:05.570979 kubelet[2963]: I0303 13:41:05.570951 2963 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 3 13:41:05.574144 kubelet[2963]: I0303 13:41:05.574092 2963 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 3 13:41:05.583241 kubelet[2963]: I0303 13:41:05.583164 2963 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 3 13:41:05.583419 kubelet[2963]: I0303 13:41:05.583238 2963 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 3 13:41:05.583419 kubelet[2963]: I0303 13:41:05.583418 2963 topology_manager.go:138] "Creating topology manager with none policy"
Mar 3 13:41:05.583539 kubelet[2963]: I0303 13:41:05.583428 2963 container_manager_linux.go:306] "Creating device plugin manager"
Mar 3 13:41:05.583574 kubelet[2963]: I0303 13:41:05.583544 2963 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 3 13:41:05.587548 kubelet[2963]: I0303 13:41:05.587372 2963 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 13:41:05.587713 kubelet[2963]: I0303 13:41:05.587674 2963 kubelet.go:475] "Attempting to sync node with API server"
Mar 3 13:41:05.587713 kubelet[2963]: I0303 13:41:05.587705 2963 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 3 13:41:05.588130 kubelet[2963]: I0303 13:41:05.587732 2963 kubelet.go:387] "Adding apiserver pod source"
Mar 3 13:41:05.588130 kubelet[2963]: I0303 13:41:05.587747 2963 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 3 13:41:05.590641 kubelet[2963]: E0303 13:41:05.590614 2963 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.215:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 3 13:41:05.591623 kubelet[2963]: E0303 13:41:05.591575 2963 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-215&limit=500&resourceVersion=0\": dial tcp 172.31.29.215:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 3 13:41:05.591914 kubelet[2963]: I0303 13:41:05.591896 2963 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 3 13:41:05.592686 kubelet[2963]: I0303 13:41:05.592665 2963 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 3 13:41:05.592849 kubelet[2963]: I0303 13:41:05.592834 2963 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 3 13:41:05.592976 kubelet[2963]: W0303 13:41:05.592965 2963 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 3 13:41:05.597961 kubelet[2963]: I0303 13:41:05.597938 2963 server.go:1262] "Started kubelet"
Mar 3 13:41:05.599388 kubelet[2963]: I0303 13:41:05.599350 2963 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 3 13:41:05.606797 kubelet[2963]: E0303 13:41:05.604265 2963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.215:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.215:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-215.1899588718435b59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-215,UID:ip-172-31-29-215,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-215,},FirstTimestamp:2026-03-03 13:41:05.597897561 +0000 UTC m=+0.612192204,LastTimestamp:2026-03-03 13:41:05.597897561 +0000 UTC m=+0.612192204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-215,}"
Mar 3 13:41:05.608218 kubelet[2963]: I0303 13:41:05.607558 2963 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 3 13:41:05.610245 kubelet[2963]: I0303 13:41:05.610207 2963 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 3 13:41:05.610663 kubelet[2963]: E0303 13:41:05.610633 2963 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-215\" not found"
Mar 3 13:41:05.612786 kubelet[2963]: I0303 13:41:05.612742 2963 server.go:310] "Adding debug handlers to kubelet server"
Mar 3 13:41:05.617548 kubelet[2963]: I0303 13:41:05.617043 2963 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 3 13:41:05.617548 kubelet[2963]: I0303 13:41:05.617121 2963 reconciler.go:29] "Reconciler: start to sync state"
Mar 3 13:41:05.621794 kubelet[2963]: I0303 13:41:05.621012 2963 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 3 13:41:05.621794 kubelet[2963]: I0303 13:41:05.621108 2963 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 3 13:41:05.621794 kubelet[2963]: I0303 13:41:05.621326 2963 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 3 13:41:05.622041 kubelet[2963]: I0303 13:41:05.621762 2963 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 3 13:41:05.625966 kubelet[2963]: E0303 13:41:05.625930 2963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-215?timeout=10s\": dial tcp 172.31.29.215:6443: connect: connection refused" interval="200ms"
Mar 3 13:41:05.627955 kubelet[2963]: E0303 13:41:05.626411 2963 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.215:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 3 13:41:05.629234 kubelet[2963]: E0303 13:41:05.629207 2963 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 3 13:41:05.631322 kubelet[2963]: I0303 13:41:05.631302 2963 factory.go:223] Registration of the containerd container factory successfully
Mar 3 13:41:05.631467 kubelet[2963]: I0303 13:41:05.631456 2963 factory.go:223] Registration of the systemd container factory successfully
Mar 3 13:41:05.631676 kubelet[2963]: I0303 13:41:05.631653 2963 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 3 13:41:05.634397 kubelet[2963]: I0303 13:41:05.634311 2963 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 3 13:41:05.638032 kubelet[2963]: I0303 13:41:05.638001 2963 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 3 13:41:05.638032 kubelet[2963]: I0303 13:41:05.638031 2963 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 3 13:41:05.638184 kubelet[2963]: I0303 13:41:05.638064 2963 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 3 13:41:05.638184 kubelet[2963]: E0303 13:41:05.638108 2963 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 3 13:41:05.648003 kubelet[2963]: E0303 13:41:05.647963 2963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.215:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 3 13:41:05.663825 kubelet[2963]: I0303 13:41:05.663793 2963 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 3 13:41:05.663825 kubelet[2963]: I0303 13:41:05.663814 2963 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 3 13:41:05.663825 kubelet[2963]: I0303 13:41:05.663836 2963 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 13:41:05.668186 kubelet[2963]: I0303 13:41:05.668144 2963 policy_none.go:49] "None policy: Start"
Mar 3 13:41:05.668186 kubelet[2963]: I0303 13:41:05.668172 2963 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 3 13:41:05.668186 kubelet[2963]: I0303 13:41:05.668187 2963 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 3 13:41:05.672136 kubelet[2963]: I0303 13:41:05.672101 2963 policy_none.go:47] "Start"
Mar 3 13:41:05.678812 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 3 13:41:05.693962 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 3 13:41:05.698265 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 3 13:41:05.707939 kubelet[2963]: E0303 13:41:05.707912 2963 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 3 13:41:05.708256 kubelet[2963]: I0303 13:41:05.708232 2963 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 3 13:41:05.708372 kubelet[2963]: I0303 13:41:05.708344 2963 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 3 13:41:05.709975 kubelet[2963]: I0303 13:41:05.709953 2963 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 3 13:41:05.711052 kubelet[2963]: E0303 13:41:05.711036 2963 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 3 13:41:05.711202 kubelet[2963]: E0303 13:41:05.711100 2963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-215\" not found"
Mar 3 13:41:05.754413 systemd[1]: Created slice kubepods-burstable-pod807da84afd202fea3e9fe902d7ee7f7f.slice - libcontainer container kubepods-burstable-pod807da84afd202fea3e9fe902d7ee7f7f.slice.
Mar 3 13:41:05.763808 kubelet[2963]: E0303 13:41:05.763738 2963 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-215\" not found" node="ip-172-31-29-215"
Mar 3 13:41:05.767846 systemd[1]: Created slice kubepods-burstable-pod742a62ec6e8bb43d1dc4de3ab6a694e7.slice - libcontainer container kubepods-burstable-pod742a62ec6e8bb43d1dc4de3ab6a694e7.slice.
Mar 3 13:41:05.771021 kubelet[2963]: E0303 13:41:05.770749 2963 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-215\" not found" node="ip-172-31-29-215"
Mar 3 13:41:05.774139 systemd[1]: Created slice kubepods-burstable-pod865b8eb563a32c8690ce05761133ad26.slice - libcontainer container kubepods-burstable-pod865b8eb563a32c8690ce05761133ad26.slice.
Mar 3 13:41:05.776309 kubelet[2963]: E0303 13:41:05.776281 2963 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-215\" not found" node="ip-172-31-29-215"
Mar 3 13:41:05.811828 kubelet[2963]: I0303 13:41:05.810972 2963 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-215"
Mar 3 13:41:05.811828 kubelet[2963]: E0303 13:41:05.811277 2963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.215:6443/api/v1/nodes\": dial tcp 172.31.29.215:6443: connect: connection refused" node="ip-172-31-29-215"
Mar 3 13:41:05.818472 kubelet[2963]: I0303 13:41:05.818436 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215"
Mar 3 13:41:05.818472 kubelet[2963]: I0303 13:41:05.818480 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215"
Mar 3 13:41:05.818654 kubelet[2963]: I0303 13:41:05.818500 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215"
Mar 3 13:41:05.818654 kubelet[2963]: I0303 13:41:05.818517 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215"
Mar 3 13:41:05.818654 kubelet[2963]: I0303 13:41:05.818533 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215"
Mar 3 13:41:05.818654 kubelet[2963]: I0303 13:41:05.818548 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/807da84afd202fea3e9fe902d7ee7f7f-ca-certs\") pod \"kube-apiserver-ip-172-31-29-215\" (UID: \"807da84afd202fea3e9fe902d7ee7f7f\") " pod="kube-system/kube-apiserver-ip-172-31-29-215"
Mar 3 13:41:05.818654 kubelet[2963]: I0303 13:41:05.818561 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/807da84afd202fea3e9fe902d7ee7f7f-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-215\" (UID: \"807da84afd202fea3e9fe902d7ee7f7f\") " pod="kube-system/kube-apiserver-ip-172-31-29-215"
Mar 3 13:41:05.818802 kubelet[2963]: I0303 13:41:05.818576 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/807da84afd202fea3e9fe902d7ee7f7f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-215\" (UID: \"807da84afd202fea3e9fe902d7ee7f7f\") " pod="kube-system/kube-apiserver-ip-172-31-29-215"
Mar 3 13:41:05.818802 kubelet[2963]: I0303 13:41:05.818602 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/865b8eb563a32c8690ce05761133ad26-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-215\" (UID: \"865b8eb563a32c8690ce05761133ad26\") " pod="kube-system/kube-scheduler-ip-172-31-29-215"
Mar 3 13:41:05.829209 kubelet[2963]: E0303 13:41:05.829172 2963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-215?timeout=10s\": dial tcp 172.31.29.215:6443: connect: connection refused" interval="400ms"
Mar 3 13:41:06.013653 kubelet[2963]: I0303 13:41:06.013615 2963 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-215"
Mar 3 13:41:06.014039 kubelet[2963]: E0303 13:41:06.014003 2963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.215:6443/api/v1/nodes\": dial tcp 172.31.29.215:6443: connect: connection refused" node="ip-172-31-29-215"
Mar 3 13:41:06.070709 containerd[1977]: time="2026-03-03T13:41:06.070561028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-215,Uid:807da84afd202fea3e9fe902d7ee7f7f,Namespace:kube-system,Attempt:0,}"
Mar 3 13:41:06.075075 containerd[1977]: time="2026-03-03T13:41:06.075038325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-215,Uid:742a62ec6e8bb43d1dc4de3ab6a694e7,Namespace:kube-system,Attempt:0,}"
Mar 3 13:41:06.080003 containerd[1977]: time="2026-03-03T13:41:06.079953280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-215,Uid:865b8eb563a32c8690ce05761133ad26,Namespace:kube-system,Attempt:0,}"
Mar 3 13:41:06.230640 kubelet[2963]: E0303 13:41:06.230595 2963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-215?timeout=10s\": dial tcp 172.31.29.215:6443: connect: connection refused" interval="800ms"
Mar 3 13:41:06.416322 kubelet[2963]: I0303 13:41:06.416191 2963 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-215"
Mar 3 13:41:06.416732 kubelet[2963]: E0303 13:41:06.416698 2963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.215:6443/api/v1/nodes\": dial tcp 172.31.29.215:6443: connect: connection refused" node="ip-172-31-29-215"
Mar 3 13:41:06.529367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306655110.mount: Deactivated successfully.
Mar 3 13:41:06.537455 containerd[1977]: time="2026-03-03T13:41:06.537405728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 3 13:41:06.540787 containerd[1977]: time="2026-03-03T13:41:06.540710028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 3 13:41:06.542030 containerd[1977]: time="2026-03-03T13:41:06.541993510Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 3 13:41:06.543468 containerd[1977]: time="2026-03-03T13:41:06.543415010Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 3 13:41:06.545680 containerd[1977]: time="2026-03-03T13:41:06.545511045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 3 13:41:06.546638 containerd[1977]: time="2026-03-03T13:41:06.546597336Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 3 13:41:06.547992 containerd[1977]: time="2026-03-03T13:41:06.547952670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 3 13:41:06.549680 containerd[1977]: time="2026-03-03T13:41:06.548611146Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 467.147966ms"
Mar 3 13:41:06.549680 containerd[1977]: time="2026-03-03T13:41:06.548760099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 3 13:41:06.550363 containerd[1977]: time="2026-03-03T13:41:06.550333519Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.863354ms"
Mar 3 13:41:06.554408 containerd[1977]: time="2026-03-03T13:41:06.554053702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 481.706399ms"
Mar 3 13:41:06.599017 containerd[1977]: time="2026-03-03T13:41:06.598964153Z" level=info msg="connecting to shim a1a1152fef2f4f92449ffb93cb1bcff4e55c06d471cd3d31a01edc96ac4b244c" address="unix:///run/containerd/s/2a81e79b249c03fbfba13f6d571531aff8c34ab444721de440fa93da1e25bcbb" namespace=k8s.io protocol=ttrpc version=3
Mar 3 13:41:06.599657 containerd[1977]: time="2026-03-03T13:41:06.599616765Z" level=info msg="connecting to shim 873e9475a729862f85b95c34a69717737c6be35d2cd8568c721cc56e93250a2f" address="unix:///run/containerd/s/aa552c06205047ef1df530834018f47c4266f86bd0995cbb54f9789df8c4eb0a" namespace=k8s.io protocol=ttrpc version=3
Mar 3 13:41:06.608971 containerd[1977]: time="2026-03-03T13:41:06.608903203Z" level=info msg="connecting to shim 8fa12fd8fb6fac4ae74a315e160e99dd6738d0f870dbf9143f0441ee1df2829a" address="unix:///run/containerd/s/77c38709662a8fda5b16807d0fce54663677af9713f42fa688534ce6459150e5" namespace=k8s.io protocol=ttrpc version=3
Mar 3 13:41:06.641215 systemd[1]: Started cri-containerd-8fa12fd8fb6fac4ae74a315e160e99dd6738d0f870dbf9143f0441ee1df2829a.scope - libcontainer container 8fa12fd8fb6fac4ae74a315e160e99dd6738d0f870dbf9143f0441ee1df2829a.
Mar 3 13:41:06.661352 systemd[1]: Started cri-containerd-873e9475a729862f85b95c34a69717737c6be35d2cd8568c721cc56e93250a2f.scope - libcontainer container 873e9475a729862f85b95c34a69717737c6be35d2cd8568c721cc56e93250a2f.
Mar 3 13:41:06.663594 systemd[1]: Started cri-containerd-a1a1152fef2f4f92449ffb93cb1bcff4e55c06d471cd3d31a01edc96ac4b244c.scope - libcontainer container a1a1152fef2f4f92449ffb93cb1bcff4e55c06d471cd3d31a01edc96ac4b244c.
Mar 3 13:41:06.739129 kubelet[2963]: E0303 13:41:06.739019 2963 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.215:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 3 13:41:06.773704 containerd[1977]: time="2026-03-03T13:41:06.773626183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-215,Uid:742a62ec6e8bb43d1dc4de3ab6a694e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fa12fd8fb6fac4ae74a315e160e99dd6738d0f870dbf9143f0441ee1df2829a\"" Mar 3 13:41:06.774272 containerd[1977]: time="2026-03-03T13:41:06.774091518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-215,Uid:807da84afd202fea3e9fe902d7ee7f7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"873e9475a729862f85b95c34a69717737c6be35d2cd8568c721cc56e93250a2f\"" Mar 3 13:41:06.803288 containerd[1977]: time="2026-03-03T13:41:06.803233659Z" level=info msg="CreateContainer within sandbox \"873e9475a729862f85b95c34a69717737c6be35d2cd8568c721cc56e93250a2f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 13:41:06.804701 containerd[1977]: time="2026-03-03T13:41:06.804658653Z" level=info msg="CreateContainer within sandbox \"8fa12fd8fb6fac4ae74a315e160e99dd6738d0f870dbf9143f0441ee1df2829a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 13:41:06.807314 containerd[1977]: time="2026-03-03T13:41:06.807282587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-215,Uid:865b8eb563a32c8690ce05761133ad26,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1a1152fef2f4f92449ffb93cb1bcff4e55c06d471cd3d31a01edc96ac4b244c\"" Mar 3 13:41:06.813971 containerd[1977]: time="2026-03-03T13:41:06.813929042Z" 
level=info msg="CreateContainer within sandbox \"a1a1152fef2f4f92449ffb93cb1bcff4e55c06d471cd3d31a01edc96ac4b244c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 13:41:06.821321 kubelet[2963]: E0303 13:41:06.821281 2963 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.215:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 13:41:06.834438 containerd[1977]: time="2026-03-03T13:41:06.834375544Z" level=info msg="Container 1d7ec1041a0e3d6235b0454fc5ee3513ff38afa3072752e192de190c89f70457: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:06.839519 containerd[1977]: time="2026-03-03T13:41:06.839477614Z" level=info msg="Container 2b19ee4a4570810c33f4e2461981f57af3b52c68ce0cf4ac29b2919bb14febc4: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:06.856749 containerd[1977]: time="2026-03-03T13:41:06.856700712Z" level=info msg="CreateContainer within sandbox \"8fa12fd8fb6fac4ae74a315e160e99dd6738d0f870dbf9143f0441ee1df2829a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d7ec1041a0e3d6235b0454fc5ee3513ff38afa3072752e192de190c89f70457\"" Mar 3 13:41:06.857461 containerd[1977]: time="2026-03-03T13:41:06.857427020Z" level=info msg="StartContainer for \"1d7ec1041a0e3d6235b0454fc5ee3513ff38afa3072752e192de190c89f70457\"" Mar 3 13:41:06.858548 containerd[1977]: time="2026-03-03T13:41:06.858446790Z" level=info msg="connecting to shim 1d7ec1041a0e3d6235b0454fc5ee3513ff38afa3072752e192de190c89f70457" address="unix:///run/containerd/s/77c38709662a8fda5b16807d0fce54663677af9713f42fa688534ce6459150e5" protocol=ttrpc version=3 Mar 3 13:41:06.859111 containerd[1977]: time="2026-03-03T13:41:06.859091191Z" level=info msg="Container 
e267db21c11aa700a559848c52c1bd7277cf8dbb1ef50dad4e14d5e170c896e6: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:06.866883 containerd[1977]: time="2026-03-03T13:41:06.866732027Z" level=info msg="CreateContainer within sandbox \"873e9475a729862f85b95c34a69717737c6be35d2cd8568c721cc56e93250a2f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2b19ee4a4570810c33f4e2461981f57af3b52c68ce0cf4ac29b2919bb14febc4\"" Mar 3 13:41:06.867308 containerd[1977]: time="2026-03-03T13:41:06.867234857Z" level=info msg="StartContainer for \"2b19ee4a4570810c33f4e2461981f57af3b52c68ce0cf4ac29b2919bb14febc4\"" Mar 3 13:41:06.868241 containerd[1977]: time="2026-03-03T13:41:06.868215048Z" level=info msg="connecting to shim 2b19ee4a4570810c33f4e2461981f57af3b52c68ce0cf4ac29b2919bb14febc4" address="unix:///run/containerd/s/aa552c06205047ef1df530834018f47c4266f86bd0995cbb54f9789df8c4eb0a" protocol=ttrpc version=3 Mar 3 13:41:06.883947 systemd[1]: Started cri-containerd-1d7ec1041a0e3d6235b0454fc5ee3513ff38afa3072752e192de190c89f70457.scope - libcontainer container 1d7ec1041a0e3d6235b0454fc5ee3513ff38afa3072752e192de190c89f70457. Mar 3 13:41:06.887596 kubelet[2963]: E0303 13:41:06.887563 2963 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-215&limit=500&resourceVersion=0\": dial tcp 172.31.29.215:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 13:41:06.889740 systemd[1]: Started cri-containerd-2b19ee4a4570810c33f4e2461981f57af3b52c68ce0cf4ac29b2919bb14febc4.scope - libcontainer container 2b19ee4a4570810c33f4e2461981f57af3b52c68ce0cf4ac29b2919bb14febc4. 
Mar 3 13:41:06.894799 containerd[1977]: time="2026-03-03T13:41:06.894749999Z" level=info msg="CreateContainer within sandbox \"a1a1152fef2f4f92449ffb93cb1bcff4e55c06d471cd3d31a01edc96ac4b244c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e267db21c11aa700a559848c52c1bd7277cf8dbb1ef50dad4e14d5e170c896e6\"" Mar 3 13:41:06.896382 containerd[1977]: time="2026-03-03T13:41:06.895381747Z" level=info msg="StartContainer for \"e267db21c11aa700a559848c52c1bd7277cf8dbb1ef50dad4e14d5e170c896e6\"" Mar 3 13:41:06.899659 containerd[1977]: time="2026-03-03T13:41:06.899632788Z" level=info msg="connecting to shim e267db21c11aa700a559848c52c1bd7277cf8dbb1ef50dad4e14d5e170c896e6" address="unix:///run/containerd/s/2a81e79b249c03fbfba13f6d571531aff8c34ab444721de440fa93da1e25bcbb" protocol=ttrpc version=3 Mar 3 13:41:06.935955 systemd[1]: Started cri-containerd-e267db21c11aa700a559848c52c1bd7277cf8dbb1ef50dad4e14d5e170c896e6.scope - libcontainer container e267db21c11aa700a559848c52c1bd7277cf8dbb1ef50dad4e14d5e170c896e6. 
Mar 3 13:41:06.970392 containerd[1977]: time="2026-03-03T13:41:06.970325527Z" level=info msg="StartContainer for \"1d7ec1041a0e3d6235b0454fc5ee3513ff38afa3072752e192de190c89f70457\" returns successfully" Mar 3 13:41:06.982041 containerd[1977]: time="2026-03-03T13:41:06.981931655Z" level=info msg="StartContainer for \"2b19ee4a4570810c33f4e2461981f57af3b52c68ce0cf4ac29b2919bb14febc4\" returns successfully" Mar 3 13:41:07.025255 containerd[1977]: time="2026-03-03T13:41:07.025138172Z" level=info msg="StartContainer for \"e267db21c11aa700a559848c52c1bd7277cf8dbb1ef50dad4e14d5e170c896e6\" returns successfully" Mar 3 13:41:07.031329 kubelet[2963]: E0303 13:41:07.031284 2963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-215?timeout=10s\": dial tcp 172.31.29.215:6443: connect: connection refused" interval="1.6s" Mar 3 13:41:07.219028 kubelet[2963]: I0303 13:41:07.219001 2963 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-215" Mar 3 13:41:07.674702 kubelet[2963]: E0303 13:41:07.674670 2963 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-215\" not found" node="ip-172-31-29-215" Mar 3 13:41:07.681793 kubelet[2963]: E0303 13:41:07.681748 2963 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-215\" not found" node="ip-172-31-29-215" Mar 3 13:41:07.685387 kubelet[2963]: E0303 13:41:07.685359 2963 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-215\" not found" node="ip-172-31-29-215" Mar 3 13:41:08.689684 kubelet[2963]: E0303 13:41:08.689459 2963 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-215\" not found" 
node="ip-172-31-29-215" Mar 3 13:41:08.691823 kubelet[2963]: E0303 13:41:08.690856 2963 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-215\" not found" node="ip-172-31-29-215" Mar 3 13:41:09.158458 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 3 13:41:09.994798 kubelet[2963]: E0303 13:41:09.993138 2963 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-215\" not found" node="ip-172-31-29-215" Mar 3 13:41:10.114950 kubelet[2963]: I0303 13:41:10.114914 2963 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-215" Mar 3 13:41:10.114950 kubelet[2963]: E0303 13:41:10.114955 2963 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-29-215\": node \"ip-172-31-29-215\" not found" Mar 3 13:41:10.212063 kubelet[2963]: I0303 13:41:10.211985 2963 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-215" Mar 3 13:41:10.243419 kubelet[2963]: E0303 13:41:10.243379 2963 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-215" Mar 3 13:41:10.243419 kubelet[2963]: I0303 13:41:10.243419 2963 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-215" Mar 3 13:41:10.248140 kubelet[2963]: E0303 13:41:10.247995 2963 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-215" Mar 3 13:41:10.248140 kubelet[2963]: I0303 13:41:10.248067 2963 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ip-172-31-29-215" Mar 3 13:41:10.253040 kubelet[2963]: E0303 13:41:10.252999 2963 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-215" Mar 3 13:41:10.592905 kubelet[2963]: I0303 13:41:10.592859 2963 apiserver.go:52] "Watching apiserver" Mar 3 13:41:10.617215 kubelet[2963]: I0303 13:41:10.617153 2963 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 13:41:11.974990 systemd[1]: Reload requested from client PID 3238 ('systemctl') (unit session-7.scope)... Mar 3 13:41:11.975010 systemd[1]: Reloading... Mar 3 13:41:12.155801 zram_generator::config[3288]: No configuration found. Mar 3 13:41:12.423382 systemd[1]: Reloading finished in 447 ms. Mar 3 13:41:12.456331 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:41:12.474060 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 13:41:12.474317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:41:12.474757 systemd[1]: kubelet.service: Consumed 1.047s CPU time, 121M memory peak. Mar 3 13:41:12.477052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:41:12.747582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:41:12.759288 (kubelet)[3342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 13:41:12.823200 kubelet[3342]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 13:41:12.823200 kubelet[3342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 13:41:12.823615 kubelet[3342]: I0303 13:41:12.823284 3342 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 13:41:12.838559 kubelet[3342]: I0303 13:41:12.838520 3342 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 3 13:41:12.838559 kubelet[3342]: I0303 13:41:12.838548 3342 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 13:41:12.838752 kubelet[3342]: I0303 13:41:12.838580 3342 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 13:41:12.838752 kubelet[3342]: I0303 13:41:12.838594 3342 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 3 13:41:12.838927 kubelet[3342]: I0303 13:41:12.838903 3342 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 13:41:12.841408 kubelet[3342]: I0303 13:41:12.841377 3342 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 3 13:41:12.844068 kubelet[3342]: I0303 13:41:12.844047 3342 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 13:41:12.847999 kubelet[3342]: I0303 13:41:12.847940 3342 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 13:41:12.851109 kubelet[3342]: I0303 13:41:12.851040 3342 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 3 13:41:12.851668 kubelet[3342]: I0303 13:41:12.851353 3342 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 13:41:12.851668 kubelet[3342]: I0303 13:41:12.851379 3342 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 13:41:12.851668 kubelet[3342]: I0303 13:41:12.851525 3342 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 
13:41:12.851668 kubelet[3342]: I0303 13:41:12.851534 3342 container_manager_linux.go:306] "Creating device plugin manager" Mar 3 13:41:12.851921 kubelet[3342]: I0303 13:41:12.851555 3342 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 13:41:12.851921 kubelet[3342]: I0303 13:41:12.851885 3342 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:41:12.852076 kubelet[3342]: I0303 13:41:12.852045 3342 kubelet.go:475] "Attempting to sync node with API server" Mar 3 13:41:12.852076 kubelet[3342]: I0303 13:41:12.852064 3342 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 13:41:12.852166 kubelet[3342]: I0303 13:41:12.852119 3342 kubelet.go:387] "Adding apiserver pod source" Mar 3 13:41:12.852166 kubelet[3342]: I0303 13:41:12.852144 3342 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 13:41:12.855945 kubelet[3342]: I0303 13:41:12.855921 3342 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 13:41:12.856643 kubelet[3342]: I0303 13:41:12.856620 3342 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 13:41:12.856735 kubelet[3342]: I0303 13:41:12.856666 3342 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 13:41:12.869805 kubelet[3342]: I0303 13:41:12.868509 3342 server.go:1262] "Started kubelet" Mar 3 13:41:12.878717 kubelet[3342]: I0303 13:41:12.878300 3342 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 3 13:41:12.882753 kubelet[3342]: I0303 13:41:12.882704 3342 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 13:41:12.883960 kubelet[3342]: I0303 13:41:12.883917 3342 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 3 13:41:12.884052 kubelet[3342]: I0303 13:41:12.883983 3342 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 13:41:12.884287 kubelet[3342]: I0303 13:41:12.884263 3342 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 13:41:12.887573 kubelet[3342]: I0303 13:41:12.887412 3342 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 13:41:12.889901 kubelet[3342]: I0303 13:41:12.889876 3342 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 3 13:41:12.890228 kubelet[3342]: E0303 13:41:12.890198 3342 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-215\" not found" Mar 3 13:41:12.892548 kubelet[3342]: I0303 13:41:12.892479 3342 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 13:41:12.892642 kubelet[3342]: I0303 13:41:12.892612 3342 reconciler.go:29] "Reconciler: start to sync state" Mar 3 13:41:12.895993 kubelet[3342]: I0303 13:41:12.895952 3342 server.go:310] "Adding debug handlers to kubelet server" Mar 3 13:41:12.897616 kubelet[3342]: I0303 13:41:12.897472 3342 factory.go:223] Registration of the systemd container factory successfully Mar 3 13:41:12.897616 kubelet[3342]: I0303 13:41:12.897566 3342 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 13:41:12.901758 kubelet[3342]: I0303 13:41:12.901573 3342 factory.go:223] Registration of the containerd container factory successfully Mar 3 13:41:12.904283 kubelet[3342]: E0303 13:41:12.903690 3342 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 13:41:12.922421 kubelet[3342]: I0303 13:41:12.922380 3342 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 3 13:41:12.928351 kubelet[3342]: I0303 13:41:12.928328 3342 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 3 13:41:12.928474 kubelet[3342]: I0303 13:41:12.928466 3342 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 3 13:41:12.928561 kubelet[3342]: I0303 13:41:12.928553 3342 kubelet.go:2428] "Starting kubelet main sync loop" Mar 3 13:41:12.928655 kubelet[3342]: E0303 13:41:12.928641 3342 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 13:41:12.971209 kubelet[3342]: I0303 13:41:12.971182 3342 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 3 13:41:12.971209 kubelet[3342]: I0303 13:41:12.971204 3342 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 3 13:41:12.971392 kubelet[3342]: I0303 13:41:12.971231 3342 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:41:12.971392 kubelet[3342]: I0303 13:41:12.971380 3342 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 3 13:41:12.971474 kubelet[3342]: I0303 13:41:12.971392 3342 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 3 13:41:12.971474 kubelet[3342]: I0303 13:41:12.971415 3342 policy_none.go:49] "None policy: Start" Mar 3 13:41:12.971474 kubelet[3342]: I0303 13:41:12.971428 3342 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 13:41:12.971474 kubelet[3342]: I0303 13:41:12.971440 3342 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 13:41:12.971618 kubelet[3342]: I0303 13:41:12.971564 3342 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" 
Mar 3 13:41:12.971618 kubelet[3342]: I0303 13:41:12.971574 3342 policy_none.go:47] "Start" Mar 3 13:41:12.981693 kubelet[3342]: E0303 13:41:12.981561 3342 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 13:41:12.981820 kubelet[3342]: I0303 13:41:12.981748 3342 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 3 13:41:12.982203 kubelet[3342]: I0303 13:41:12.981761 3342 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 13:41:12.983586 kubelet[3342]: I0303 13:41:12.982458 3342 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 3 13:41:12.985550 kubelet[3342]: E0303 13:41:12.985528 3342 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 13:41:13.030620 kubelet[3342]: I0303 13:41:13.029557 3342 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-215" Mar 3 13:41:13.030620 kubelet[3342]: I0303 13:41:13.029918 3342 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-215" Mar 3 13:41:13.032447 kubelet[3342]: I0303 13:41:13.031935 3342 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-215" Mar 3 13:41:13.085457 kubelet[3342]: I0303 13:41:13.085423 3342 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-215" Mar 3 13:41:13.101270 kubelet[3342]: I0303 13:41:13.101205 3342 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-215" Mar 3 13:41:13.102737 kubelet[3342]: I0303 13:41:13.102321 3342 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-215" Mar 3 13:41:13.194103 kubelet[3342]: I0303 13:41:13.194060 3342 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/807da84afd202fea3e9fe902d7ee7f7f-ca-certs\") pod \"kube-apiserver-ip-172-31-29-215\" (UID: \"807da84afd202fea3e9fe902d7ee7f7f\") " pod="kube-system/kube-apiserver-ip-172-31-29-215" Mar 3 13:41:13.194103 kubelet[3342]: I0303 13:41:13.194108 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/807da84afd202fea3e9fe902d7ee7f7f-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-215\" (UID: \"807da84afd202fea3e9fe902d7ee7f7f\") " pod="kube-system/kube-apiserver-ip-172-31-29-215" Mar 3 13:41:13.194103 kubelet[3342]: I0303 13:41:13.194127 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/807da84afd202fea3e9fe902d7ee7f7f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-215\" (UID: \"807da84afd202fea3e9fe902d7ee7f7f\") " pod="kube-system/kube-apiserver-ip-172-31-29-215" Mar 3 13:41:13.195641 kubelet[3342]: I0303 13:41:13.194146 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215" Mar 3 13:41:13.195641 kubelet[3342]: I0303 13:41:13.194163 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215" Mar 3 13:41:13.195641 kubelet[3342]: I0303 
13:41:13.194177 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215" Mar 3 13:41:13.195641 kubelet[3342]: I0303 13:41:13.194191 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215" Mar 3 13:41:13.195641 kubelet[3342]: I0303 13:41:13.194234 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/742a62ec6e8bb43d1dc4de3ab6a694e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-215\" (UID: \"742a62ec6e8bb43d1dc4de3ab6a694e7\") " pod="kube-system/kube-controller-manager-ip-172-31-29-215" Mar 3 13:41:13.195787 kubelet[3342]: I0303 13:41:13.194279 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/865b8eb563a32c8690ce05761133ad26-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-215\" (UID: \"865b8eb563a32c8690ce05761133ad26\") " pod="kube-system/kube-scheduler-ip-172-31-29-215" Mar 3 13:41:13.853656 kubelet[3342]: I0303 13:41:13.853595 3342 apiserver.go:52] "Watching apiserver" Mar 3 13:41:13.893134 kubelet[3342]: I0303 13:41:13.893073 3342 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 13:41:13.961533 kubelet[3342]: I0303 13:41:13.961481 3342 kubelet.go:3220] "Creating a mirror pod for static 
pod" pod="kube-system/kube-scheduler-ip-172-31-29-215" Mar 3 13:41:13.975259 kubelet[3342]: E0303 13:41:13.975015 3342 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-215\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-215" Mar 3 13:41:14.009021 kubelet[3342]: I0303 13:41:14.008876 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-215" podStartSLOduration=1.008858283 podStartE2EDuration="1.008858283s" podCreationTimestamp="2026-03-03 13:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:41:14.008582246 +0000 UTC m=+1.239055811" watchObservedRunningTime="2026-03-03 13:41:14.008858283 +0000 UTC m=+1.239331833" Mar 3 13:41:14.044702 kubelet[3342]: I0303 13:41:14.044639 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-215" podStartSLOduration=1.044623346 podStartE2EDuration="1.044623346s" podCreationTimestamp="2026-03-03 13:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:41:14.025372815 +0000 UTC m=+1.255846367" watchObservedRunningTime="2026-03-03 13:41:14.044623346 +0000 UTC m=+1.275096891" Mar 3 13:41:14.061956 kubelet[3342]: I0303 13:41:14.061514 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-215" podStartSLOduration=1.061491634 podStartE2EDuration="1.061491634s" podCreationTimestamp="2026-03-03 13:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:41:14.045489664 +0000 UTC m=+1.275963217" watchObservedRunningTime="2026-03-03 13:41:14.061491634 +0000 UTC m=+1.291965188" Mar 3 
13:41:18.580493 kubelet[3342]: I0303 13:41:18.580460 3342 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 3 13:41:18.581934 containerd[1977]: time="2026-03-03T13:41:18.581520658Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 3 13:41:18.582153 kubelet[3342]: I0303 13:41:18.581865 3342 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 3 13:41:19.257506 systemd[1]: Created slice kubepods-besteffort-pod22ea2041_b1e8_43ca_a685_b29b23afbd67.slice - libcontainer container kubepods-besteffort-pod22ea2041_b1e8_43ca_a685_b29b23afbd67.slice. Mar 3 13:41:19.334363 kubelet[3342]: I0303 13:41:19.334299 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22ea2041-b1e8-43ca-a685-b29b23afbd67-xtables-lock\") pod \"kube-proxy-jz4sz\" (UID: \"22ea2041-b1e8-43ca-a685-b29b23afbd67\") " pod="kube-system/kube-proxy-jz4sz" Mar 3 13:41:19.334363 kubelet[3342]: I0303 13:41:19.334344 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22ea2041-b1e8-43ca-a685-b29b23afbd67-lib-modules\") pod \"kube-proxy-jz4sz\" (UID: \"22ea2041-b1e8-43ca-a685-b29b23afbd67\") " pod="kube-system/kube-proxy-jz4sz" Mar 3 13:41:19.334363 kubelet[3342]: I0303 13:41:19.334362 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22ea2041-b1e8-43ca-a685-b29b23afbd67-kube-proxy\") pod \"kube-proxy-jz4sz\" (UID: \"22ea2041-b1e8-43ca-a685-b29b23afbd67\") " pod="kube-system/kube-proxy-jz4sz" Mar 3 13:41:19.334363 kubelet[3342]: I0303 13:41:19.334378 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-44hvc\" (UniqueName: \"kubernetes.io/projected/22ea2041-b1e8-43ca-a685-b29b23afbd67-kube-api-access-44hvc\") pod \"kube-proxy-jz4sz\" (UID: \"22ea2041-b1e8-43ca-a685-b29b23afbd67\") " pod="kube-system/kube-proxy-jz4sz" Mar 3 13:41:19.570699 containerd[1977]: time="2026-03-03T13:41:19.570646511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jz4sz,Uid:22ea2041-b1e8-43ca-a685-b29b23afbd67,Namespace:kube-system,Attempt:0,}" Mar 3 13:41:19.639365 containerd[1977]: time="2026-03-03T13:41:19.639246222Z" level=info msg="connecting to shim 1eea5e1b7f5d9c21415938f9134b92dcb979baed1ae97914302a547408dc391c" address="unix:///run/containerd/s/9852d2950ae03cb28256ae4e439a35bb9a1c544fa2b4c092d16cf773a6ae351a" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:41:19.675016 systemd[1]: Started cri-containerd-1eea5e1b7f5d9c21415938f9134b92dcb979baed1ae97914302a547408dc391c.scope - libcontainer container 1eea5e1b7f5d9c21415938f9134b92dcb979baed1ae97914302a547408dc391c. 
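The `pod_startup_latency_tracker` entries above embed structured key=value fields in each journal line. A minimal sketch (Python; the helper name is made up for illustration) of extracting the pod name and SLO duration from such a line:

```python
import re

# Matches the kubelet pod_startup_latency_tracker fields as they appear
# in the journal lines above: pod="..." followed by podStartSLOduration=<seconds>.
LATENCY_RE = re.compile(r'pod="(?P<pod>[^"]+)".*?podStartSLOduration=(?P<slo>[0-9.]+)')

def parse_startup_latency(line: str):
    """Return (pod, seconds) for an 'Observed pod startup duration' line, else None."""
    if "Observed pod startup duration" not in line:
        return None
    m = LATENCY_RE.search(line)
    if not m:
        return None
    return m.group("pod"), float(m.group("slo"))

line = ('kubelet[3342]: I0303 13:41:14.008876 3342 pod_startup_latency_tracker.go:104] '
        '"Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-215" '
        'podStartSLOduration=1.008858283 podStartE2EDuration="1.008858283s"')
print(parse_startup_latency(line))
# → ('kube-system/kube-scheduler-ip-172-31-29-215', 1.008858283)
```

The same pattern applies to the kube-apiserver and kube-controller-manager entries, which differ only in pod name and duration.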
Mar 3 13:41:19.705705 containerd[1977]: time="2026-03-03T13:41:19.705598912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jz4sz,Uid:22ea2041-b1e8-43ca-a685-b29b23afbd67,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eea5e1b7f5d9c21415938f9134b92dcb979baed1ae97914302a547408dc391c\"" Mar 3 13:41:19.714152 containerd[1977]: time="2026-03-03T13:41:19.714093921Z" level=info msg="CreateContainer within sandbox \"1eea5e1b7f5d9c21415938f9134b92dcb979baed1ae97914302a547408dc391c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 3 13:41:19.735753 containerd[1977]: time="2026-03-03T13:41:19.735624860Z" level=info msg="Container 525095f8edc8445b630841cdd8e0ea7b233b6a0ed7c816ef0a967298d896837d: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:19.752687 containerd[1977]: time="2026-03-03T13:41:19.752020389Z" level=info msg="CreateContainer within sandbox \"1eea5e1b7f5d9c21415938f9134b92dcb979baed1ae97914302a547408dc391c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"525095f8edc8445b630841cdd8e0ea7b233b6a0ed7c816ef0a967298d896837d\"" Mar 3 13:41:19.754050 containerd[1977]: time="2026-03-03T13:41:19.753708951Z" level=info msg="StartContainer for \"525095f8edc8445b630841cdd8e0ea7b233b6a0ed7c816ef0a967298d896837d\"" Mar 3 13:41:19.760797 containerd[1977]: time="2026-03-03T13:41:19.760741243Z" level=info msg="connecting to shim 525095f8edc8445b630841cdd8e0ea7b233b6a0ed7c816ef0a967298d896837d" address="unix:///run/containerd/s/9852d2950ae03cb28256ae4e439a35bb9a1c544fa2b4c092d16cf773a6ae351a" protocol=ttrpc version=3 Mar 3 13:41:19.788050 systemd[1]: Started cri-containerd-525095f8edc8445b630841cdd8e0ea7b233b6a0ed7c816ef0a967298d896837d.scope - libcontainer container 525095f8edc8445b630841cdd8e0ea7b233b6a0ed7c816ef0a967298d896837d. 
Mar 3 13:41:19.808474 systemd[1]: Created slice kubepods-besteffort-pod64583918_f259_4aea_b9f4_7fae425e8ceb.slice - libcontainer container kubepods-besteffort-pod64583918_f259_4aea_b9f4_7fae425e8ceb.slice. Mar 3 13:41:19.837017 kubelet[3342]: I0303 13:41:19.836739 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64583918-f259-4aea-b9f4-7fae425e8ceb-var-lib-calico\") pod \"tigera-operator-5588576f44-67bmr\" (UID: \"64583918-f259-4aea-b9f4-7fae425e8ceb\") " pod="tigera-operator/tigera-operator-5588576f44-67bmr" Mar 3 13:41:19.837616 kubelet[3342]: I0303 13:41:19.837575 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmktc\" (UniqueName: \"kubernetes.io/projected/64583918-f259-4aea-b9f4-7fae425e8ceb-kube-api-access-rmktc\") pod \"tigera-operator-5588576f44-67bmr\" (UID: \"64583918-f259-4aea-b9f4-7fae425e8ceb\") " pod="tigera-operator/tigera-operator-5588576f44-67bmr" Mar 3 13:41:19.872233 containerd[1977]: time="2026-03-03T13:41:19.872197586Z" level=info msg="StartContainer for \"525095f8edc8445b630841cdd8e0ea7b233b6a0ed7c816ef0a967298d896837d\" returns successfully" Mar 3 13:41:20.119310 containerd[1977]: time="2026-03-03T13:41:20.119195896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-67bmr,Uid:64583918-f259-4aea-b9f4-7fae425e8ceb,Namespace:tigera-operator,Attempt:0,}" Mar 3 13:41:20.146889 containerd[1977]: time="2026-03-03T13:41:20.146814036Z" level=info msg="connecting to shim ed75c7ff68369dd06e6538d08cef4623c40b81f8a4f6bb92f28b4318e36ad15a" address="unix:///run/containerd/s/ad843e6ddaa473e7c91dfe59dc0edbd58981e107c8da19deb13ceefb75dbd872" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:41:20.175970 systemd[1]: Started cri-containerd-ed75c7ff68369dd06e6538d08cef4623c40b81f8a4f6bb92f28b4318e36ad15a.scope - libcontainer container 
ed75c7ff68369dd06e6538d08cef4623c40b81f8a4f6bb92f28b4318e36ad15a. Mar 3 13:41:20.235789 containerd[1977]: time="2026-03-03T13:41:20.235692633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-67bmr,Uid:64583918-f259-4aea-b9f4-7fae425e8ceb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ed75c7ff68369dd06e6538d08cef4623c40b81f8a4f6bb92f28b4318e36ad15a\"" Mar 3 13:41:20.239943 containerd[1977]: time="2026-03-03T13:41:20.239872033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 3 13:41:20.542660 kubelet[3342]: I0303 13:41:20.542363 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jz4sz" podStartSLOduration=1.5423368549999998 podStartE2EDuration="1.542336855s" podCreationTimestamp="2026-03-03 13:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:41:19.987503596 +0000 UTC m=+7.217977149" watchObservedRunningTime="2026-03-03 13:41:20.542336855 +0000 UTC m=+7.772810404" Mar 3 13:41:21.477824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610974375.mount: Deactivated successfully. Mar 3 13:41:23.500194 update_engine[1967]: I20260303 13:41:23.500097 1967 update_attempter.cc:509] Updating boot flags... 
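The `kubepods-besteffort-pod….slice` units created above follow kubelet's systemd cgroup naming: the pod UID with `-` replaced by `_` (since `-` is systemd's slice hierarchy separator), wrapped in a QoS-class slice. A minimal sketch of that mapping, assuming the BestEffort class seen in these logs:

```python
def besteffort_slice_name(pod_uid: str) -> str:
    """Build the transient systemd slice name kubelet uses for a BestEffort pod.

    Dashes in the UID are escaped to underscores because '-' separates
    levels in the systemd slice hierarchy.
    """
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

# UID taken from the tigera-operator pod created in the log above.
print(besteffort_slice_name("64583918-f259-4aea-b9f4-7fae425e8ceb"))
# → kubepods-besteffort-pod64583918_f259_4aea_b9f4_7fae425e8ceb.slice
```

This reproduces the slice names logged at 13:41:19.257 and 13:41:19.808; Guaranteed and Burstable pods use different parent slices.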
Mar 3 13:41:25.951388 containerd[1977]: time="2026-03-03T13:41:25.951317459Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:25.952405 containerd[1977]: time="2026-03-03T13:41:25.952367313Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 3 13:41:25.953581 containerd[1977]: time="2026-03-03T13:41:25.953536025Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:25.955637 containerd[1977]: time="2026-03-03T13:41:25.955576302Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:25.956264 containerd[1977]: time="2026-03-03T13:41:25.956048093Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.7161347s" Mar 3 13:41:25.956264 containerd[1977]: time="2026-03-03T13:41:25.956081589Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 3 13:41:25.961612 containerd[1977]: time="2026-03-03T13:41:25.961546877Z" level=info msg="CreateContainer within sandbox \"ed75c7ff68369dd06e6538d08cef4623c40b81f8a4f6bb92f28b4318e36ad15a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 3 13:41:25.970296 containerd[1977]: time="2026-03-03T13:41:25.970263060Z" level=info msg="Container 
c7a1581ae6224097468f21053b9e147f0450dc4f15d3f355865f6dcc206844b4: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:25.989129 containerd[1977]: time="2026-03-03T13:41:25.989077699Z" level=info msg="CreateContainer within sandbox \"ed75c7ff68369dd06e6538d08cef4623c40b81f8a4f6bb92f28b4318e36ad15a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c7a1581ae6224097468f21053b9e147f0450dc4f15d3f355865f6dcc206844b4\"" Mar 3 13:41:25.992196 containerd[1977]: time="2026-03-03T13:41:25.992086158Z" level=info msg="StartContainer for \"c7a1581ae6224097468f21053b9e147f0450dc4f15d3f355865f6dcc206844b4\"" Mar 3 13:41:25.994021 containerd[1977]: time="2026-03-03T13:41:25.993939304Z" level=info msg="connecting to shim c7a1581ae6224097468f21053b9e147f0450dc4f15d3f355865f6dcc206844b4" address="unix:///run/containerd/s/ad843e6ddaa473e7c91dfe59dc0edbd58981e107c8da19deb13ceefb75dbd872" protocol=ttrpc version=3 Mar 3 13:41:26.030007 systemd[1]: Started cri-containerd-c7a1581ae6224097468f21053b9e147f0450dc4f15d3f355865f6dcc206844b4.scope - libcontainer container c7a1581ae6224097468f21053b9e147f0450dc4f15d3f355865f6dcc206844b4. 
Mar 3 13:41:26.067030 containerd[1977]: time="2026-03-03T13:41:26.066993022Z" level=info msg="StartContainer for \"c7a1581ae6224097468f21053b9e147f0450dc4f15d3f355865f6dcc206844b4\" returns successfully" Mar 3 13:41:27.021431 kubelet[3342]: I0303 13:41:27.021358 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-67bmr" podStartSLOduration=2.302249808 podStartE2EDuration="8.02133961s" podCreationTimestamp="2026-03-03 13:41:19 +0000 UTC" firstStartedPulling="2026-03-03 13:41:20.238058757 +0000 UTC m=+7.468532299" lastFinishedPulling="2026-03-03 13:41:25.95714857 +0000 UTC m=+13.187622101" observedRunningTime="2026-03-03 13:41:27.02130531 +0000 UTC m=+14.251778865" watchObservedRunningTime="2026-03-03 13:41:27.02133961 +0000 UTC m=+14.251813163" Mar 3 13:41:32.944074 sudo[2376]: pam_unix(sudo:session): session closed for user root Mar 3 13:41:33.022796 sshd[2375]: Connection closed by 68.220.241.50 port 45442 Mar 3 13:41:33.024823 sshd-session[2372]: pam_unix(sshd:session): session closed for user core Mar 3 13:41:33.031188 systemd[1]: sshd@6-172.31.29.215:22-68.220.241.50:45442.service: Deactivated successfully. Mar 3 13:41:33.035579 systemd[1]: session-7.scope: Deactivated successfully. Mar 3 13:41:33.036978 systemd[1]: session-7.scope: Consumed 6.174s CPU time, 170.1M memory peak. Mar 3 13:41:33.040027 systemd-logind[1964]: Session 7 logged out. Waiting for processes to exit. Mar 3 13:41:33.046292 systemd-logind[1964]: Removed session 7. Mar 3 13:41:34.614859 systemd[1]: Created slice kubepods-besteffort-pod82787ab3_8f20_48ce_be66_1ff2e4aadbd2.slice - libcontainer container kubepods-besteffort-pod82787ab3_8f20_48ce_be66_1ff2e4aadbd2.slice. 
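In the tigera-operator latency entry above, the gap between `podStartE2EDuration` (8.021 s) and `podStartSLOduration` (2.302 s) is the image pull time, which the SLO figure excludes. A sketch verifying that from the two pull timestamps in the same entry (Python; Go-style timestamps carry nanoseconds, which `datetime`'s `%f` cannot parse, so the fraction is truncated to microseconds):

```python
from datetime import datetime

def parse_k8s_time(ts: str) -> datetime:
    """Parse timestamps like '2026-03-03 13:41:20.238058757 +0000 UTC',
    truncating the fractional part to microseconds for %f."""
    date_part, frac_and_rest = ts.split(".", 1)
    frac = frac_and_rest.split(" ", 1)[0]
    return datetime.strptime(f"{date_part}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")

started = parse_k8s_time("2026-03-03 13:41:20.238058757 +0000 UTC")
finished = parse_k8s_time("2026-03-03 13:41:25.95714857 +0000 UTC")
pull_seconds = (finished - started).total_seconds()
print(round(pull_seconds, 3))  # → 5.719
```

This agrees with both the 8.021 − 2.302 ≈ 5.72 s gap and the "in 5.7161347s" figure containerd logs for the pull itself at 13:41:25.956.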
Mar 3 13:41:34.645179 kubelet[3342]: I0303 13:41:34.645061 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98pmn\" (UniqueName: \"kubernetes.io/projected/82787ab3-8f20-48ce-be66-1ff2e4aadbd2-kube-api-access-98pmn\") pod \"calico-typha-77785956c6-mggpx\" (UID: \"82787ab3-8f20-48ce-be66-1ff2e4aadbd2\") " pod="calico-system/calico-typha-77785956c6-mggpx" Mar 3 13:41:34.645179 kubelet[3342]: I0303 13:41:34.645136 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82787ab3-8f20-48ce-be66-1ff2e4aadbd2-tigera-ca-bundle\") pod \"calico-typha-77785956c6-mggpx\" (UID: \"82787ab3-8f20-48ce-be66-1ff2e4aadbd2\") " pod="calico-system/calico-typha-77785956c6-mggpx" Mar 3 13:41:34.646870 kubelet[3342]: I0303 13:41:34.646154 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/82787ab3-8f20-48ce-be66-1ff2e4aadbd2-typha-certs\") pod \"calico-typha-77785956c6-mggpx\" (UID: \"82787ab3-8f20-48ce-be66-1ff2e4aadbd2\") " pod="calico-system/calico-typha-77785956c6-mggpx" Mar 3 13:41:34.834500 systemd[1]: Created slice kubepods-besteffort-pod545f5bdd_5267_4152_8751_462b75aa5f18.slice - libcontainer container kubepods-besteffort-pod545f5bdd_5267_4152_8751_462b75aa5f18.slice. 
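The reconciler's `UniqueName` strings above follow a simple shape for these plugin types: the plugin name, then the pod UID and volume name joined with a dash. A sketch reconstructing it (an observation from the host-path, configmap, secret, and projected entries in this log, not a general guarantee for every volume plugin):

```python
def volume_unique_name(plugin: str, pod_uid: str, volume: str) -> str:
    """Rebuild the UniqueName shown in the reconciler_common log lines:
    <plugin>/<pod-uid>-<volume-name>."""
    return f"{plugin}/{pod_uid}-{volume}"

print(volume_unique_name("kubernetes.io/secret",
                         "82787ab3-8f20-48ce-be66-1ff2e4aadbd2",
                         "typha-certs"))
# → kubernetes.io/secret/82787ab3-8f20-48ce-be66-1ff2e4aadbd2-typha-certs
```

This matches the `typha-certs` entry logged at 13:41:34.646 for the calico-typha pod.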
Mar 3 13:41:34.848142 kubelet[3342]: I0303 13:41:34.848099 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-cni-net-dir\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850477 kubelet[3342]: I0303 13:41:34.848332 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-flexvol-driver-host\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850477 kubelet[3342]: I0303 13:41:34.848362 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-xtables-lock\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850477 kubelet[3342]: I0303 13:41:34.848386 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/545f5bdd-5267-4152-8751-462b75aa5f18-tigera-ca-bundle\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850477 kubelet[3342]: I0303 13:41:34.848409 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-sys-fs\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850477 kubelet[3342]: I0303 13:41:34.848442 3342 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-cni-bin-dir\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850813 kubelet[3342]: I0303 13:41:34.848465 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-cni-log-dir\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850813 kubelet[3342]: I0303 13:41:34.848487 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmtp\" (UniqueName: \"kubernetes.io/projected/545f5bdd-5267-4152-8751-462b75aa5f18-kube-api-access-kfmtp\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850813 kubelet[3342]: I0303 13:41:34.848514 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-bpffs\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850813 kubelet[3342]: I0303 13:41:34.848536 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-nodeproc\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.850813 kubelet[3342]: I0303 13:41:34.848561 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-policysync\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.851030 kubelet[3342]: I0303 13:41:34.848613 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-lib-modules\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.851030 kubelet[3342]: I0303 13:41:34.848636 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-var-run-calico\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.851030 kubelet[3342]: I0303 13:41:34.848658 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/545f5bdd-5267-4152-8751-462b75aa5f18-node-certs\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.851030 kubelet[3342]: I0303 13:41:34.848682 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/545f5bdd-5267-4152-8751-462b75aa5f18-var-lib-calico\") pod \"calico-node-lzbc5\" (UID: \"545f5bdd-5267-4152-8751-462b75aa5f18\") " pod="calico-system/calico-node-lzbc5" Mar 3 13:41:34.913841 kubelet[3342]: E0303 13:41:34.913725 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:34.923006 containerd[1977]: time="2026-03-03T13:41:34.922918859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77785956c6-mggpx,Uid:82787ab3-8f20-48ce-be66-1ff2e4aadbd2,Namespace:calico-system,Attempt:0,}" Mar 3 13:41:34.949089 kubelet[3342]: I0303 13:41:34.948938 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e-kubelet-dir\") pod \"csi-node-driver-vzcdk\" (UID: \"4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e\") " pod="calico-system/csi-node-driver-vzcdk" Mar 3 13:41:34.949089 kubelet[3342]: I0303 13:41:34.948974 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e-socket-dir\") pod \"csi-node-driver-vzcdk\" (UID: \"4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e\") " pod="calico-system/csi-node-driver-vzcdk" Mar 3 13:41:34.949089 kubelet[3342]: I0303 13:41:34.949024 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e-registration-dir\") pod \"csi-node-driver-vzcdk\" (UID: \"4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e\") " pod="calico-system/csi-node-driver-vzcdk" Mar 3 13:41:34.949089 kubelet[3342]: I0303 13:41:34.949092 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm9d6\" (UniqueName: \"kubernetes.io/projected/4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e-kube-api-access-wm9d6\") pod \"csi-node-driver-vzcdk\" (UID: \"4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e\") " pod="calico-system/csi-node-driver-vzcdk" Mar 3 13:41:34.949272 kubelet[3342]: I0303 
13:41:34.949119 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e-varrun\") pod \"csi-node-driver-vzcdk\" (UID: \"4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e\") " pod="calico-system/csi-node-driver-vzcdk" Mar 3 13:41:34.965376 kubelet[3342]: E0303 13:41:34.964931 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:34.965376 kubelet[3342]: W0303 13:41:34.964954 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:34.965376 kubelet[3342]: E0303 13:41:34.964977 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:34.968713 containerd[1977]: time="2026-03-03T13:41:34.968672483Z" level=info msg="connecting to shim 06499bb64c2401205ab085ff9492c7cb0803e5a63d244b2dd3395143d0053dbb" address="unix:///run/containerd/s/3eccd4aec323c73d831d6c894ca3c061f852412eb16cadbdd64cbf54ff2ae2ad" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:41:34.982951 kubelet[3342]: E0303 13:41:34.982866 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:34.982951 kubelet[3342]: W0303 13:41:34.982886 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:34.982951 kubelet[3342]: E0303 13:41:34.982904 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:35.001080 systemd[1]: Started cri-containerd-06499bb64c2401205ab085ff9492c7cb0803e5a63d244b2dd3395143d0053dbb.scope - libcontainer container 06499bb64c2401205ab085ff9492c7cb0803e5a63d244b2dd3395143d0053dbb. Mar 3 13:41:35.050860 kubelet[3342]: E0303 13:41:35.050807 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.050860 kubelet[3342]: W0303 13:41:35.050835 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.050860 kubelet[3342]: E0303 13:41:35.050861 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:35.052130 kubelet[3342]: E0303 13:41:35.051799 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.052130 kubelet[3342]: W0303 13:41:35.051820 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.052130 kubelet[3342]: E0303 13:41:35.051837 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:35.065175 kubelet[3342]: E0303 13:41:35.065159 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.065250 kubelet[3342]: W0303 13:41:35.065177 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.065250 kubelet[3342]: E0303 13:41:35.065190 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:35.065593 kubelet[3342]: E0303 13:41:35.065568 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.065593 kubelet[3342]: W0303 13:41:35.065585 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.065859 kubelet[3342]: E0303 13:41:35.065599 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:35.066964 kubelet[3342]: E0303 13:41:35.066947 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.067133 kubelet[3342]: W0303 13:41:35.067033 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.067133 kubelet[3342]: E0303 13:41:35.067053 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:35.067515 kubelet[3342]: E0303 13:41:35.067501 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.067753 kubelet[3342]: W0303 13:41:35.067558 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.067753 kubelet[3342]: E0303 13:41:35.067575 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:35.067944 kubelet[3342]: E0303 13:41:35.067923 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.068085 kubelet[3342]: W0303 13:41:35.068018 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.068085 kubelet[3342]: E0303 13:41:35.068038 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:35.068849 kubelet[3342]: E0303 13:41:35.068737 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.068849 kubelet[3342]: W0303 13:41:35.068828 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.068951 kubelet[3342]: E0303 13:41:35.068851 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:35.070000 kubelet[3342]: E0303 13:41:35.069983 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.070000 kubelet[3342]: W0303 13:41:35.070000 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.070129 kubelet[3342]: E0303 13:41:35.070015 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:35.070340 kubelet[3342]: E0303 13:41:35.070310 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.070340 kubelet[3342]: W0303 13:41:35.070327 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.070340 kubelet[3342]: E0303 13:41:35.070340 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:35.071916 kubelet[3342]: E0303 13:41:35.071899 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.071916 kubelet[3342]: W0303 13:41:35.071915 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.072250 kubelet[3342]: E0303 13:41:35.071930 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:35.077082 containerd[1977]: time="2026-03-03T13:41:35.077010241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77785956c6-mggpx,Uid:82787ab3-8f20-48ce-be66-1ff2e4aadbd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"06499bb64c2401205ab085ff9492c7cb0803e5a63d244b2dd3395143d0053dbb\"" Mar 3 13:41:35.079939 containerd[1977]: time="2026-03-03T13:41:35.079904794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 3 13:41:35.086510 kubelet[3342]: E0303 13:41:35.086480 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:35.086510 kubelet[3342]: W0303 13:41:35.086503 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:35.086640 kubelet[3342]: E0303 13:41:35.086525 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:35.148113 containerd[1977]: time="2026-03-03T13:41:35.148067840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lzbc5,Uid:545f5bdd-5267-4152-8751-462b75aa5f18,Namespace:calico-system,Attempt:0,}" Mar 3 13:41:35.168577 containerd[1977]: time="2026-03-03T13:41:35.168077165Z" level=info msg="connecting to shim 422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f" address="unix:///run/containerd/s/8a978f6d0ffa6a248cbd1be403a647b1b3781da47e48b8312b365e2f21c78003" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:41:35.201176 systemd[1]: Started cri-containerd-422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f.scope - libcontainer container 422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f. Mar 3 13:41:35.237998 containerd[1977]: time="2026-03-03T13:41:35.237910660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lzbc5,Uid:545f5bdd-5267-4152-8751-462b75aa5f18,Namespace:calico-system,Attempt:0,} returns sandbox id \"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\"" Mar 3 13:41:36.669723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845638408.mount: Deactivated successfully. 
Mar 3 13:41:36.929703 kubelet[3342]: E0303 13:41:36.929589 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:38.066503 containerd[1977]: time="2026-03-03T13:41:38.066447729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:38.067583 containerd[1977]: time="2026-03-03T13:41:38.067464894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 3 13:41:38.068621 containerd[1977]: time="2026-03-03T13:41:38.068583119Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:38.071357 containerd[1977]: time="2026-03-03T13:41:38.071307563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:38.071966 containerd[1977]: time="2026-03-03T13:41:38.071825953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.991881824s" Mar 3 13:41:38.071966 containerd[1977]: time="2026-03-03T13:41:38.071856160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 3 13:41:38.073861 containerd[1977]: time="2026-03-03T13:41:38.073658832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 3 13:41:38.093800 containerd[1977]: time="2026-03-03T13:41:38.093727881Z" level=info msg="CreateContainer within sandbox \"06499bb64c2401205ab085ff9492c7cb0803e5a63d244b2dd3395143d0053dbb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 3 13:41:38.116520 containerd[1977]: time="2026-03-03T13:41:38.116285547Z" level=info msg="Container dfef79fc8a2f7732a0fec0b9b297167563a2370d570e4854956259ab61f946f8: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:38.132454 containerd[1977]: time="2026-03-03T13:41:38.132408949Z" level=info msg="CreateContainer within sandbox \"06499bb64c2401205ab085ff9492c7cb0803e5a63d244b2dd3395143d0053dbb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dfef79fc8a2f7732a0fec0b9b297167563a2370d570e4854956259ab61f946f8\"" Mar 3 13:41:38.133380 containerd[1977]: time="2026-03-03T13:41:38.133352777Z" level=info msg="StartContainer for \"dfef79fc8a2f7732a0fec0b9b297167563a2370d570e4854956259ab61f946f8\"" Mar 3 13:41:38.134846 containerd[1977]: time="2026-03-03T13:41:38.134791246Z" level=info msg="connecting to shim dfef79fc8a2f7732a0fec0b9b297167563a2370d570e4854956259ab61f946f8" address="unix:///run/containerd/s/3eccd4aec323c73d831d6c894ca3c061f852412eb16cadbdd64cbf54ff2ae2ad" protocol=ttrpc version=3 Mar 3 13:41:38.179965 systemd[1]: Started cri-containerd-dfef79fc8a2f7732a0fec0b9b297167563a2370d570e4854956259ab61f946f8.scope - libcontainer container dfef79fc8a2f7732a0fec0b9b297167563a2370d570e4854956259ab61f946f8. 
Mar 3 13:41:38.247930 containerd[1977]: time="2026-03-03T13:41:38.247887117Z" level=info msg="StartContainer for \"dfef79fc8a2f7732a0fec0b9b297167563a2370d570e4854956259ab61f946f8\" returns successfully" Mar 3 13:41:38.929569 kubelet[3342]: E0303 13:41:38.929224 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:39.063874 kubelet[3342]: E0303 13:41:39.063744 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.063874 kubelet[3342]: W0303 13:41:39.063783 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.063874 kubelet[3342]: E0303 13:41:39.063803 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.064071 kubelet[3342]: E0303 13:41:39.063968 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.064071 kubelet[3342]: W0303 13:41:39.063974 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.064071 kubelet[3342]: E0303 13:41:39.063983 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.064145 kubelet[3342]: E0303 13:41:39.064102 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.064145 kubelet[3342]: W0303 13:41:39.064108 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.064145 kubelet[3342]: E0303 13:41:39.064114 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.064287 kubelet[3342]: E0303 13:41:39.064274 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.064287 kubelet[3342]: W0303 13:41:39.064280 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.064287 kubelet[3342]: E0303 13:41:39.064287 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.064612 kubelet[3342]: E0303 13:41:39.064585 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.064612 kubelet[3342]: W0303 13:41:39.064603 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.064701 kubelet[3342]: E0303 13:41:39.064619 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.064997 kubelet[3342]: E0303 13:41:39.064982 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.065966 kubelet[3342]: W0303 13:41:39.065817 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.065966 kubelet[3342]: E0303 13:41:39.065840 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.066061 kubelet[3342]: E0303 13:41:39.066033 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.066061 kubelet[3342]: W0303 13:41:39.066040 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.066061 kubelet[3342]: E0303 13:41:39.066048 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.066199 kubelet[3342]: E0303 13:41:39.066179 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.066199 kubelet[3342]: W0303 13:41:39.066193 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.066249 kubelet[3342]: E0303 13:41:39.066200 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.066340 kubelet[3342]: E0303 13:41:39.066325 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.066340 kubelet[3342]: W0303 13:41:39.066336 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.066407 kubelet[3342]: E0303 13:41:39.066343 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.066474 kubelet[3342]: E0303 13:41:39.066460 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.066474 kubelet[3342]: W0303 13:41:39.066470 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.066525 kubelet[3342]: E0303 13:41:39.066478 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.067221 kubelet[3342]: E0303 13:41:39.066610 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.067221 kubelet[3342]: W0303 13:41:39.066621 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.067221 kubelet[3342]: E0303 13:41:39.066627 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.067221 kubelet[3342]: E0303 13:41:39.066738 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.067221 kubelet[3342]: W0303 13:41:39.066744 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.067221 kubelet[3342]: E0303 13:41:39.066750 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.067221 kubelet[3342]: E0303 13:41:39.066878 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.067221 kubelet[3342]: W0303 13:41:39.066884 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.067221 kubelet[3342]: E0303 13:41:39.066890 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.067221 kubelet[3342]: E0303 13:41:39.067021 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.067548 kubelet[3342]: W0303 13:41:39.067026 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.067548 kubelet[3342]: E0303 13:41:39.067032 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.067548 kubelet[3342]: E0303 13:41:39.067144 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.067548 kubelet[3342]: W0303 13:41:39.067150 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.067548 kubelet[3342]: E0303 13:41:39.067156 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.092365 kubelet[3342]: E0303 13:41:39.092323 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.092365 kubelet[3342]: W0303 13:41:39.092345 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.092365 kubelet[3342]: E0303 13:41:39.092363 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.092825 kubelet[3342]: E0303 13:41:39.092545 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.092825 kubelet[3342]: W0303 13:41:39.092552 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.092825 kubelet[3342]: E0303 13:41:39.092560 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.093201 kubelet[3342]: E0303 13:41:39.093067 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.093201 kubelet[3342]: W0303 13:41:39.093094 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.093201 kubelet[3342]: E0303 13:41:39.093108 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.093380 kubelet[3342]: E0303 13:41:39.093361 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.093380 kubelet[3342]: W0303 13:41:39.093373 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.093495 kubelet[3342]: E0303 13:41:39.093385 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.093707 kubelet[3342]: E0303 13:41:39.093692 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.093707 kubelet[3342]: W0303 13:41:39.093705 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.093780 kubelet[3342]: E0303 13:41:39.093722 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.094059 kubelet[3342]: E0303 13:41:39.094019 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.094059 kubelet[3342]: W0303 13:41:39.094028 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.094059 kubelet[3342]: E0303 13:41:39.094037 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.094687 kubelet[3342]: E0303 13:41:39.094668 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.094687 kubelet[3342]: W0303 13:41:39.094686 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.094759 kubelet[3342]: E0303 13:41:39.094699 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.095027 kubelet[3342]: E0303 13:41:39.094970 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.095027 kubelet[3342]: W0303 13:41:39.094979 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.095027 kubelet[3342]: E0303 13:41:39.094987 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.095235 kubelet[3342]: E0303 13:41:39.095135 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.095235 kubelet[3342]: W0303 13:41:39.095141 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.095235 kubelet[3342]: E0303 13:41:39.095148 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.095324 kubelet[3342]: E0303 13:41:39.095310 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.095324 kubelet[3342]: W0303 13:41:39.095318 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.095372 kubelet[3342]: E0303 13:41:39.095326 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.095471 kubelet[3342]: E0303 13:41:39.095457 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.095471 kubelet[3342]: W0303 13:41:39.095467 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.095610 kubelet[3342]: E0303 13:41:39.095474 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.096025 kubelet[3342]: E0303 13:41:39.096007 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.096025 kubelet[3342]: W0303 13:41:39.096023 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.096134 kubelet[3342]: E0303 13:41:39.096033 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.096351 kubelet[3342]: E0303 13:41:39.096217 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.096351 kubelet[3342]: W0303 13:41:39.096227 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.096351 kubelet[3342]: E0303 13:41:39.096250 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.096595 kubelet[3342]: E0303 13:41:39.096579 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.096595 kubelet[3342]: W0303 13:41:39.096590 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.096657 kubelet[3342]: E0303 13:41:39.096609 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.097106 kubelet[3342]: E0303 13:41:39.097085 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.097106 kubelet[3342]: W0303 13:41:39.097099 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.097106 kubelet[3342]: E0303 13:41:39.097109 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.097566 kubelet[3342]: E0303 13:41:39.097543 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.097566 kubelet[3342]: W0303 13:41:39.097556 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.097566 kubelet[3342]: E0303 13:41:39.097566 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.098273 kubelet[3342]: E0303 13:41:39.098255 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.098273 kubelet[3342]: W0303 13:41:39.098267 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.098273 kubelet[3342]: E0303 13:41:39.098276 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:41:39.098546 kubelet[3342]: E0303 13:41:39.098524 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:41:39.098546 kubelet[3342]: W0303 13:41:39.098540 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:41:39.098644 kubelet[3342]: E0303 13:41:39.098553 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:41:39.480445 containerd[1977]: time="2026-03-03T13:41:39.480376166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:39.481445 containerd[1977]: time="2026-03-03T13:41:39.481401930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 3 13:41:39.482549 containerd[1977]: time="2026-03-03T13:41:39.482500509Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:39.485816 containerd[1977]: time="2026-03-03T13:41:39.484979478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:39.485816 containerd[1977]: time="2026-03-03T13:41:39.485636200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.411951523s" Mar 3 13:41:39.485816 containerd[1977]: time="2026-03-03T13:41:39.485660974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 3 13:41:39.490848 containerd[1977]: time="2026-03-03T13:41:39.490666119Z" level=info msg="CreateContainer within sandbox \"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 3 13:41:39.514917 containerd[1977]: time="2026-03-03T13:41:39.514860805Z" level=info msg="Container a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:39.551227 containerd[1977]: time="2026-03-03T13:41:39.551175200Z" level=info msg="CreateContainer within sandbox \"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748\"" Mar 3 13:41:39.551978 containerd[1977]: time="2026-03-03T13:41:39.551944692Z" level=info msg="StartContainer for \"a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748\"" Mar 3 13:41:39.553755 containerd[1977]: time="2026-03-03T13:41:39.553667527Z" level=info msg="connecting to shim a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748" address="unix:///run/containerd/s/8a978f6d0ffa6a248cbd1be403a647b1b3781da47e48b8312b365e2f21c78003" protocol=ttrpc version=3 Mar 3 13:41:39.577072 systemd[1]: Started cri-containerd-a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748.scope - libcontainer container a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748. 
Mar 3 13:41:39.650686 containerd[1977]: time="2026-03-03T13:41:39.650640975Z" level=info msg="StartContainer for \"a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748\" returns successfully" Mar 3 13:41:39.663171 systemd[1]: cri-containerd-a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748.scope: Deactivated successfully. Mar 3 13:41:39.700580 containerd[1977]: time="2026-03-03T13:41:39.700509222Z" level=info msg="received container exit event container_id:\"a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748\" id:\"a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748\" pid:4241 exited_at:{seconds:1772545299 nanos:666666319}" Mar 3 13:41:39.737866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6ee8a8c89e1a4addb6500486eb3bf6e1211f6d494cba3ffee44615f3812d748-rootfs.mount: Deactivated successfully. Mar 3 13:41:40.063500 kubelet[3342]: I0303 13:41:40.063418 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:41:40.065082 containerd[1977]: time="2026-03-03T13:41:40.064831433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 3 13:41:40.100511 kubelet[3342]: I0303 13:41:40.100215 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77785956c6-mggpx" podStartSLOduration=3.104379364 podStartE2EDuration="6.099009103s" podCreationTimestamp="2026-03-03 13:41:34 +0000 UTC" firstStartedPulling="2026-03-03 13:41:35.078723166 +0000 UTC m=+22.309196702" lastFinishedPulling="2026-03-03 13:41:38.073352911 +0000 UTC m=+25.303826441" observedRunningTime="2026-03-03 13:41:39.077719756 +0000 UTC m=+26.308193307" watchObservedRunningTime="2026-03-03 13:41:40.099009103 +0000 UTC m=+27.329482657" Mar 3 13:41:40.930532 kubelet[3342]: E0303 13:41:40.930135 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:42.932065 kubelet[3342]: E0303 13:41:42.932024 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:44.930706 kubelet[3342]: E0303 13:41:44.929852 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:46.930676 kubelet[3342]: E0303 13:41:46.929876 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:48.929177 kubelet[3342]: E0303 13:41:48.929040 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:50.930144 kubelet[3342]: E0303 13:41:50.929984 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:52.491735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456373054.mount: Deactivated successfully. Mar 3 13:41:52.543229 containerd[1977]: time="2026-03-03T13:41:52.536141202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:52.555019 containerd[1977]: time="2026-03-03T13:41:52.537308641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 3 13:41:52.609652 containerd[1977]: time="2026-03-03T13:41:52.608853878Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:52.611630 containerd[1977]: time="2026-03-03T13:41:52.611561943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:52.612460 containerd[1977]: time="2026-03-03T13:41:52.612225738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.547351989s" Mar 3 13:41:52.612460 containerd[1977]: time="2026-03-03T13:41:52.612268928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 3 13:41:52.619789 containerd[1977]: time="2026-03-03T13:41:52.617843768Z" level=info msg="CreateContainer within sandbox 
\"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 3 13:41:52.666716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249486248.mount: Deactivated successfully. Mar 3 13:41:52.667559 containerd[1977]: time="2026-03-03T13:41:52.666972577Z" level=info msg="Container 889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:52.682482 containerd[1977]: time="2026-03-03T13:41:52.682440014Z" level=info msg="CreateContainer within sandbox \"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198\"" Mar 3 13:41:52.683037 containerd[1977]: time="2026-03-03T13:41:52.682978157Z" level=info msg="StartContainer for \"889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198\"" Mar 3 13:41:52.685019 containerd[1977]: time="2026-03-03T13:41:52.684982655Z" level=info msg="connecting to shim 889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198" address="unix:///run/containerd/s/8a978f6d0ffa6a248cbd1be403a647b1b3781da47e48b8312b365e2f21c78003" protocol=ttrpc version=3 Mar 3 13:41:52.737022 systemd[1]: Started cri-containerd-889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198.scope - libcontainer container 889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198. Mar 3 13:41:52.860941 containerd[1977]: time="2026-03-03T13:41:52.860898451Z" level=info msg="StartContainer for \"889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198\" returns successfully" Mar 3 13:41:52.882036 systemd[1]: cri-containerd-889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198.scope: Deactivated successfully. 
Mar 3 13:41:52.908849 containerd[1977]: time="2026-03-03T13:41:52.908574612Z" level=info msg="received container exit event container_id:\"889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198\" id:\"889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198\" pid:4300 exited_at:{seconds:1772545312 nanos:908387238}" Mar 3 13:41:52.932687 kubelet[3342]: E0303 13:41:52.932616 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:53.104400 containerd[1977]: time="2026-03-03T13:41:53.104050867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 3 13:41:53.490763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-889d9e67d821bb384eaf3bcf5c8d218936237efcd4b94db231df5262a288d198-rootfs.mount: Deactivated successfully. 
Mar 3 13:41:54.929648 kubelet[3342]: E0303 13:41:54.929595 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:56.240288 containerd[1977]: time="2026-03-03T13:41:56.240234715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:56.241624 containerd[1977]: time="2026-03-03T13:41:56.241528636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 3 13:41:56.242687 containerd[1977]: time="2026-03-03T13:41:56.242646566Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:56.245016 containerd[1977]: time="2026-03-03T13:41:56.244975285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:41:56.257850 containerd[1977]: time="2026-03-03T13:41:56.257696906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.153610367s" Mar 3 13:41:56.257850 containerd[1977]: time="2026-03-03T13:41:56.257739309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" 
Mar 3 13:41:56.262264 containerd[1977]: time="2026-03-03T13:41:56.262223641Z" level=info msg="CreateContainer within sandbox \"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 3 13:41:56.273210 containerd[1977]: time="2026-03-03T13:41:56.273070882Z" level=info msg="Container 6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:56.281734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount691169731.mount: Deactivated successfully. Mar 3 13:41:56.287107 containerd[1977]: time="2026-03-03T13:41:56.287053746Z" level=info msg="CreateContainer within sandbox \"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f\"" Mar 3 13:41:56.287622 containerd[1977]: time="2026-03-03T13:41:56.287587028Z" level=info msg="StartContainer for \"6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f\"" Mar 3 13:41:56.289301 containerd[1977]: time="2026-03-03T13:41:56.289247839Z" level=info msg="connecting to shim 6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f" address="unix:///run/containerd/s/8a978f6d0ffa6a248cbd1be403a647b1b3781da47e48b8312b365e2f21c78003" protocol=ttrpc version=3 Mar 3 13:41:56.315986 systemd[1]: Started cri-containerd-6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f.scope - libcontainer container 6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f. 
Mar 3 13:41:56.388740 containerd[1977]: time="2026-03-03T13:41:56.388478785Z" level=info msg="StartContainer for \"6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f\" returns successfully" Mar 3 13:41:56.928965 kubelet[3342]: E0303 13:41:56.928912 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:57.320098 systemd[1]: cri-containerd-6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f.scope: Deactivated successfully. Mar 3 13:41:57.320372 systemd[1]: cri-containerd-6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f.scope: Consumed 543ms CPU time, 168.8M memory peak, 5.9M read from disk, 177M written to disk. Mar 3 13:41:57.329838 containerd[1977]: time="2026-03-03T13:41:57.329793035Z" level=info msg="received container exit event container_id:\"6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f\" id:\"6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f\" pid:4358 exited_at:{seconds:1772545317 nanos:329490226}" Mar 3 13:41:57.373537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6de8e5475a907f3bb36646c84e37c9ee912eb42f34c990032d19e85dc368282f-rootfs.mount: Deactivated successfully. Mar 3 13:41:57.393221 kubelet[3342]: I0303 13:41:57.393197 3342 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 3 13:41:57.670335 systemd[1]: Created slice kubepods-burstable-podc0efcb7b_93dc_4d18_9b21_e4718494e8da.slice - libcontainer container kubepods-burstable-podc0efcb7b_93dc_4d18_9b21_e4718494e8da.slice. 
Mar 3 13:41:57.680985 systemd[1]: Created slice kubepods-besteffort-pod67ac86c0_8603_47c0_a321_050ba93c4a04.slice - libcontainer container kubepods-besteffort-pod67ac86c0_8603_47c0_a321_050ba93c4a04.slice. Mar 3 13:41:57.686752 systemd[1]: Created slice kubepods-besteffort-pod76acbdd4_dcb3_4968_9ce4_244304aed0dc.slice - libcontainer container kubepods-besteffort-pod76acbdd4_dcb3_4968_9ce4_244304aed0dc.slice. Mar 3 13:41:57.694645 systemd[1]: Created slice kubepods-burstable-pod04b4b97f_e557_499d_975d_9317e6f7cb25.slice - libcontainer container kubepods-burstable-pod04b4b97f_e557_499d_975d_9317e6f7cb25.slice. Mar 3 13:41:57.702997 systemd[1]: Created slice kubepods-besteffort-podfd08abe4_ac09_48f6_bb93_09e07187cce3.slice - libcontainer container kubepods-besteffort-podfd08abe4_ac09_48f6_bb93_09e07187cce3.slice. Mar 3 13:41:57.733868 kubelet[3342]: I0303 13:41:57.733823 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76acbdd4-dcb3-4968-9ce4-244304aed0dc-tigera-ca-bundle\") pod \"calico-kube-controllers-77bcd9b5f-p2cx4\" (UID: \"76acbdd4-dcb3-4968-9ce4-244304aed0dc\") " pod="calico-system/calico-kube-controllers-77bcd9b5f-p2cx4" Mar 3 13:41:57.733868 kubelet[3342]: I0303 13:41:57.733859 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67ac86c0-8603-47c0-a321-050ba93c4a04-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-vnbqd\" (UID: \"67ac86c0-8603-47c0-a321-050ba93c4a04\") " pod="calico-system/goldmane-cccfbd5cf-vnbqd" Mar 3 13:41:57.733868 kubelet[3342]: I0303 13:41:57.733874 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjc5t\" (UniqueName: \"kubernetes.io/projected/67ac86c0-8603-47c0-a321-050ba93c4a04-kube-api-access-rjc5t\") pod \"goldmane-cccfbd5cf-vnbqd\" (UID: 
\"67ac86c0-8603-47c0-a321-050ba93c4a04\") " pod="calico-system/goldmane-cccfbd5cf-vnbqd" Mar 3 13:41:57.734076 kubelet[3342]: I0303 13:41:57.733891 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04b4b97f-e557-499d-975d-9317e6f7cb25-config-volume\") pod \"coredns-66bc5c9577-scfjs\" (UID: \"04b4b97f-e557-499d-975d-9317e6f7cb25\") " pod="kube-system/coredns-66bc5c9577-scfjs" Mar 3 13:41:57.734076 kubelet[3342]: I0303 13:41:57.733908 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddmk\" (UniqueName: \"kubernetes.io/projected/fd08abe4-ac09-48f6-bb93-09e07187cce3-kube-api-access-5ddmk\") pod \"calico-apiserver-57f877f9cd-7w9vm\" (UID: \"fd08abe4-ac09-48f6-bb93-09e07187cce3\") " pod="calico-system/calico-apiserver-57f877f9cd-7w9vm" Mar 3 13:41:57.734076 kubelet[3342]: I0303 13:41:57.733928 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0efcb7b-93dc-4d18-9b21-e4718494e8da-config-volume\") pod \"coredns-66bc5c9577-4h8sk\" (UID: \"c0efcb7b-93dc-4d18-9b21-e4718494e8da\") " pod="kube-system/coredns-66bc5c9577-4h8sk" Mar 3 13:41:57.734076 kubelet[3342]: I0303 13:41:57.733943 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5zqg\" (UniqueName: \"kubernetes.io/projected/c0efcb7b-93dc-4d18-9b21-e4718494e8da-kube-api-access-m5zqg\") pod \"coredns-66bc5c9577-4h8sk\" (UID: \"c0efcb7b-93dc-4d18-9b21-e4718494e8da\") " pod="kube-system/coredns-66bc5c9577-4h8sk" Mar 3 13:41:57.734076 kubelet[3342]: I0303 13:41:57.733957 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/fd08abe4-ac09-48f6-bb93-09e07187cce3-calico-apiserver-certs\") pod \"calico-apiserver-57f877f9cd-7w9vm\" (UID: \"fd08abe4-ac09-48f6-bb93-09e07187cce3\") " pod="calico-system/calico-apiserver-57f877f9cd-7w9vm" Mar 3 13:41:57.734204 kubelet[3342]: I0303 13:41:57.733974 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxgz8\" (UniqueName: \"kubernetes.io/projected/76acbdd4-dcb3-4968-9ce4-244304aed0dc-kube-api-access-dxgz8\") pod \"calico-kube-controllers-77bcd9b5f-p2cx4\" (UID: \"76acbdd4-dcb3-4968-9ce4-244304aed0dc\") " pod="calico-system/calico-kube-controllers-77bcd9b5f-p2cx4" Mar 3 13:41:57.734204 kubelet[3342]: I0303 13:41:57.733991 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ac86c0-8603-47c0-a321-050ba93c4a04-config\") pod \"goldmane-cccfbd5cf-vnbqd\" (UID: \"67ac86c0-8603-47c0-a321-050ba93c4a04\") " pod="calico-system/goldmane-cccfbd5cf-vnbqd" Mar 3 13:41:57.734204 kubelet[3342]: I0303 13:41:57.734006 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/67ac86c0-8603-47c0-a321-050ba93c4a04-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-vnbqd\" (UID: \"67ac86c0-8603-47c0-a321-050ba93c4a04\") " pod="calico-system/goldmane-cccfbd5cf-vnbqd" Mar 3 13:41:57.734204 kubelet[3342]: I0303 13:41:57.734022 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-979pj\" (UniqueName: \"kubernetes.io/projected/04b4b97f-e557-499d-975d-9317e6f7cb25-kube-api-access-979pj\") pod \"coredns-66bc5c9577-scfjs\" (UID: \"04b4b97f-e557-499d-975d-9317e6f7cb25\") " pod="kube-system/coredns-66bc5c9577-scfjs" Mar 3 13:41:57.750272 systemd[1]: Created slice kubepods-besteffort-podf1461887_2a0b_476f_877e_64ddc68ab5a8.slice - 
libcontainer container kubepods-besteffort-podf1461887_2a0b_476f_877e_64ddc68ab5a8.slice. Mar 3 13:41:57.767494 systemd[1]: Created slice kubepods-besteffort-podf1a68e16_2cc0_44c9_8e62_da67fc36763c.slice - libcontainer container kubepods-besteffort-podf1a68e16_2cc0_44c9_8e62_da67fc36763c.slice. Mar 3 13:41:57.835750 kubelet[3342]: I0303 13:41:57.835113 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ds7r\" (UniqueName: \"kubernetes.io/projected/f1a68e16-2cc0-44c9-8e62-da67fc36763c-kube-api-access-6ds7r\") pod \"calico-apiserver-57f877f9cd-frzvl\" (UID: \"f1a68e16-2cc0-44c9-8e62-da67fc36763c\") " pod="calico-system/calico-apiserver-57f877f9cd-frzvl" Mar 3 13:41:57.835750 kubelet[3342]: I0303 13:41:57.835158 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f1461887-2a0b-476f-877e-64ddc68ab5a8-nginx-config\") pod \"whisker-67bd66cb7f-r4xhf\" (UID: \"f1461887-2a0b-476f-877e-64ddc68ab5a8\") " pod="calico-system/whisker-67bd66cb7f-r4xhf" Mar 3 13:41:57.835750 kubelet[3342]: I0303 13:41:57.835211 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1461887-2a0b-476f-877e-64ddc68ab5a8-whisker-ca-bundle\") pod \"whisker-67bd66cb7f-r4xhf\" (UID: \"f1461887-2a0b-476f-877e-64ddc68ab5a8\") " pod="calico-system/whisker-67bd66cb7f-r4xhf" Mar 3 13:41:57.835750 kubelet[3342]: I0303 13:41:57.835227 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts6qj\" (UniqueName: \"kubernetes.io/projected/f1461887-2a0b-476f-877e-64ddc68ab5a8-kube-api-access-ts6qj\") pod \"whisker-67bd66cb7f-r4xhf\" (UID: \"f1461887-2a0b-476f-877e-64ddc68ab5a8\") " pod="calico-system/whisker-67bd66cb7f-r4xhf" Mar 3 13:41:57.835750 kubelet[3342]: I0303 
13:41:57.835262 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1a68e16-2cc0-44c9-8e62-da67fc36763c-calico-apiserver-certs\") pod \"calico-apiserver-57f877f9cd-frzvl\" (UID: \"f1a68e16-2cc0-44c9-8e62-da67fc36763c\") " pod="calico-system/calico-apiserver-57f877f9cd-frzvl" Mar 3 13:41:57.836301 kubelet[3342]: I0303 13:41:57.835277 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1461887-2a0b-476f-877e-64ddc68ab5a8-whisker-backend-key-pair\") pod \"whisker-67bd66cb7f-r4xhf\" (UID: \"f1461887-2a0b-476f-877e-64ddc68ab5a8\") " pod="calico-system/whisker-67bd66cb7f-r4xhf" Mar 3 13:41:57.978694 containerd[1977]: time="2026-03-03T13:41:57.978602526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4h8sk,Uid:c0efcb7b-93dc-4d18-9b21-e4718494e8da,Namespace:kube-system,Attempt:0,}" Mar 3 13:41:57.987298 containerd[1977]: time="2026-03-03T13:41:57.987243263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-vnbqd,Uid:67ac86c0-8603-47c0-a321-050ba93c4a04,Namespace:calico-system,Attempt:0,}" Mar 3 13:41:58.008509 containerd[1977]: time="2026-03-03T13:41:58.008202226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-scfjs,Uid:04b4b97f-e557-499d-975d-9317e6f7cb25,Namespace:kube-system,Attempt:0,}" Mar 3 13:41:58.009533 containerd[1977]: time="2026-03-03T13:41:58.009495994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bcd9b5f-p2cx4,Uid:76acbdd4-dcb3-4968-9ce4-244304aed0dc,Namespace:calico-system,Attempt:0,}" Mar 3 13:41:58.010637 containerd[1977]: time="2026-03-03T13:41:58.010564407Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57f877f9cd-7w9vm,Uid:fd08abe4-ac09-48f6-bb93-09e07187cce3,Namespace:calico-system,Attempt:0,}" Mar 3 13:41:58.066580 containerd[1977]: time="2026-03-03T13:41:58.066534878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67bd66cb7f-r4xhf,Uid:f1461887-2a0b-476f-877e-64ddc68ab5a8,Namespace:calico-system,Attempt:0,}" Mar 3 13:41:58.075075 containerd[1977]: time="2026-03-03T13:41:58.074961477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f877f9cd-frzvl,Uid:f1a68e16-2cc0-44c9-8e62-da67fc36763c,Namespace:calico-system,Attempt:0,}" Mar 3 13:41:58.178033 containerd[1977]: time="2026-03-03T13:41:58.177985615Z" level=info msg="CreateContainer within sandbox \"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 3 13:41:58.214101 containerd[1977]: time="2026-03-03T13:41:58.214056048Z" level=info msg="Container 91eab6f1a429e823aec1f0b141440d28032b0846eafe5a800711f7111caeab42: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:41:58.227568 containerd[1977]: time="2026-03-03T13:41:58.227455744Z" level=info msg="CreateContainer within sandbox \"422b7b865bce93571614b1a6308f20793a53f16d4445b08da64726c064c4db6f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"91eab6f1a429e823aec1f0b141440d28032b0846eafe5a800711f7111caeab42\"" Mar 3 13:41:58.228410 containerd[1977]: time="2026-03-03T13:41:58.228388664Z" level=info msg="StartContainer for \"91eab6f1a429e823aec1f0b141440d28032b0846eafe5a800711f7111caeab42\"" Mar 3 13:41:58.230921 containerd[1977]: time="2026-03-03T13:41:58.230382533Z" level=info msg="connecting to shim 91eab6f1a429e823aec1f0b141440d28032b0846eafe5a800711f7111caeab42" address="unix:///run/containerd/s/8a978f6d0ffa6a248cbd1be403a647b1b3781da47e48b8312b365e2f21c78003" protocol=ttrpc version=3 Mar 3 13:41:58.272126 systemd[1]: Started 
cri-containerd-91eab6f1a429e823aec1f0b141440d28032b0846eafe5a800711f7111caeab42.scope - libcontainer container 91eab6f1a429e823aec1f0b141440d28032b0846eafe5a800711f7111caeab42. Mar 3 13:41:58.395139 containerd[1977]: time="2026-03-03T13:41:58.395096835Z" level=error msg="Failed to destroy network for sandbox \"3530af3fe024891574ecd0c0246bc444db538da8f93f75296e28f96e811e6583\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.398332 systemd[1]: run-netns-cni\x2d5aa423f7\x2dad3c\x2d347a\x2db044\x2da38062b6f42a.mount: Deactivated successfully. Mar 3 13:41:58.421177 containerd[1977]: time="2026-03-03T13:41:58.402807609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f877f9cd-frzvl,Uid:f1a68e16-2cc0-44c9-8e62-da67fc36763c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3530af3fe024891574ecd0c0246bc444db538da8f93f75296e28f96e811e6583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.421861 kubelet[3342]: E0303 13:41:58.421453 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3530af3fe024891574ecd0c0246bc444db538da8f93f75296e28f96e811e6583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.421861 kubelet[3342]: E0303 13:41:58.421544 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3530af3fe024891574ecd0c0246bc444db538da8f93f75296e28f96e811e6583\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57f877f9cd-frzvl" Mar 3 13:41:58.421861 kubelet[3342]: E0303 13:41:58.421581 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3530af3fe024891574ecd0c0246bc444db538da8f93f75296e28f96e811e6583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57f877f9cd-frzvl" Mar 3 13:41:58.422266 kubelet[3342]: E0303 13:41:58.421737 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f877f9cd-frzvl_calico-system(f1a68e16-2cc0-44c9-8e62-da67fc36763c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f877f9cd-frzvl_calico-system(f1a68e16-2cc0-44c9-8e62-da67fc36763c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3530af3fe024891574ecd0c0246bc444db538da8f93f75296e28f96e811e6583\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57f877f9cd-frzvl" podUID="f1a68e16-2cc0-44c9-8e62-da67fc36763c" Mar 3 13:41:58.465285 containerd[1977]: time="2026-03-03T13:41:58.465253864Z" level=info msg="StartContainer for \"91eab6f1a429e823aec1f0b141440d28032b0846eafe5a800711f7111caeab42\" returns successfully" Mar 3 13:41:58.481286 containerd[1977]: time="2026-03-03T13:41:58.480949102Z" level=error msg="Failed to destroy network for sandbox \"bc24e57b9a26ec6f30ca83ec1e8a2bf984d47aa96c6418b9567b027bcf1f5a8a\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.483222 containerd[1977]: time="2026-03-03T13:41:58.483161717Z" level=error msg="Failed to destroy network for sandbox \"f5ca09f44365ef2a43798eca31a186dd7d42bbefa44051d8db15e228be10113a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.483764 systemd[1]: run-netns-cni\x2d10ba993d\x2dc06a\x2d9832\x2d56db\x2dd7a6dacb118a.mount: Deactivated successfully. Mar 3 13:41:58.488641 systemd[1]: run-netns-cni\x2d920339e4\x2d79a1\x2d5d09\x2d73b9\x2dc871252332ea.mount: Deactivated successfully. Mar 3 13:41:58.491172 containerd[1977]: time="2026-03-03T13:41:58.489662615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f877f9cd-7w9vm,Uid:fd08abe4-ac09-48f6-bb93-09e07187cce3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc24e57b9a26ec6f30ca83ec1e8a2bf984d47aa96c6418b9567b027bcf1f5a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.491978 kubelet[3342]: E0303 13:41:58.491926 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc24e57b9a26ec6f30ca83ec1e8a2bf984d47aa96c6418b9567b027bcf1f5a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.492137 kubelet[3342]: E0303 13:41:58.491983 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"bc24e57b9a26ec6f30ca83ec1e8a2bf984d47aa96c6418b9567b027bcf1f5a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57f877f9cd-7w9vm" Mar 3 13:41:58.492137 kubelet[3342]: E0303 13:41:58.492007 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc24e57b9a26ec6f30ca83ec1e8a2bf984d47aa96c6418b9567b027bcf1f5a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57f877f9cd-7w9vm" Mar 3 13:41:58.492137 kubelet[3342]: E0303 13:41:58.492068 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f877f9cd-7w9vm_calico-system(fd08abe4-ac09-48f6-bb93-09e07187cce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f877f9cd-7w9vm_calico-system(fd08abe4-ac09-48f6-bb93-09e07187cce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc24e57b9a26ec6f30ca83ec1e8a2bf984d47aa96c6418b9567b027bcf1f5a8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57f877f9cd-7w9vm" podUID="fd08abe4-ac09-48f6-bb93-09e07187cce3" Mar 3 13:41:58.492637 containerd[1977]: time="2026-03-03T13:41:58.492356305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-vnbqd,Uid:67ac86c0-8603-47c0-a321-050ba93c4a04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"f5ca09f44365ef2a43798eca31a186dd7d42bbefa44051d8db15e228be10113a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.492983 kubelet[3342]: E0303 13:41:58.492951 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca09f44365ef2a43798eca31a186dd7d42bbefa44051d8db15e228be10113a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.493052 kubelet[3342]: E0303 13:41:58.492998 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca09f44365ef2a43798eca31a186dd7d42bbefa44051d8db15e228be10113a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-vnbqd" Mar 3 13:41:58.493052 kubelet[3342]: E0303 13:41:58.493014 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca09f44365ef2a43798eca31a186dd7d42bbefa44051d8db15e228be10113a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-vnbqd" Mar 3 13:41:58.493115 kubelet[3342]: E0303 13:41:58.493054 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-vnbqd_calico-system(67ac86c0-8603-47c0-a321-050ba93c4a04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-cccfbd5cf-vnbqd_calico-system(67ac86c0-8603-47c0-a321-050ba93c4a04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5ca09f44365ef2a43798eca31a186dd7d42bbefa44051d8db15e228be10113a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-vnbqd" podUID="67ac86c0-8603-47c0-a321-050ba93c4a04" Mar 3 13:41:58.513163 containerd[1977]: time="2026-03-03T13:41:58.513103154Z" level=error msg="Failed to destroy network for sandbox \"4729cbebb17224ff863d8f34b2437800d9bf2bbad234304a7ba8c00c9e1dd061\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.515166 containerd[1977]: time="2026-03-03T13:41:58.515117784Z" level=error msg="Failed to destroy network for sandbox \"e8da4d3d05ffd744d9e7fee09b1cf645243061ce22f8722be3a8ae341704a25a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.523206 systemd[1]: run-netns-cni\x2da0031683\x2dc709\x2dac9e\x2d8a69\x2d1e383c14712d.mount: Deactivated successfully. 
Mar 3 13:41:58.523984 containerd[1977]: time="2026-03-03T13:41:58.523525980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4h8sk,Uid:c0efcb7b-93dc-4d18-9b21-e4718494e8da,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4729cbebb17224ff863d8f34b2437800d9bf2bbad234304a7ba8c00c9e1dd061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.527084 kubelet[3342]: E0303 13:41:58.526112 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4729cbebb17224ff863d8f34b2437800d9bf2bbad234304a7ba8c00c9e1dd061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.527084 kubelet[3342]: E0303 13:41:58.526922 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4729cbebb17224ff863d8f34b2437800d9bf2bbad234304a7ba8c00c9e1dd061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4h8sk" Mar 3 13:41:58.527084 kubelet[3342]: E0303 13:41:58.526978 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4729cbebb17224ff863d8f34b2437800d9bf2bbad234304a7ba8c00c9e1dd061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4h8sk" 
Mar 3 13:41:58.528274 containerd[1977]: time="2026-03-03T13:41:58.527382552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-scfjs,Uid:04b4b97f-e557-499d-975d-9317e6f7cb25,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8da4d3d05ffd744d9e7fee09b1cf645243061ce22f8722be3a8ae341704a25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.528443 kubelet[3342]: E0303 13:41:58.528199 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4h8sk_kube-system(c0efcb7b-93dc-4d18-9b21-e4718494e8da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4h8sk_kube-system(c0efcb7b-93dc-4d18-9b21-e4718494e8da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4729cbebb17224ff863d8f34b2437800d9bf2bbad234304a7ba8c00c9e1dd061\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4h8sk" podUID="c0efcb7b-93dc-4d18-9b21-e4718494e8da" Mar 3 13:41:58.532524 kubelet[3342]: E0303 13:41:58.529888 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8da4d3d05ffd744d9e7fee09b1cf645243061ce22f8722be3a8ae341704a25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.532524 kubelet[3342]: E0303 13:41:58.529938 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"e8da4d3d05ffd744d9e7fee09b1cf645243061ce22f8722be3a8ae341704a25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-scfjs" Mar 3 13:41:58.532524 kubelet[3342]: E0303 13:41:58.529962 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8da4d3d05ffd744d9e7fee09b1cf645243061ce22f8722be3a8ae341704a25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-scfjs" Mar 3 13:41:58.533001 kubelet[3342]: E0303 13:41:58.530018 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-scfjs_kube-system(04b4b97f-e557-499d-975d-9317e6f7cb25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-scfjs_kube-system(04b4b97f-e557-499d-975d-9317e6f7cb25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8da4d3d05ffd744d9e7fee09b1cf645243061ce22f8722be3a8ae341704a25a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-scfjs" podUID="04b4b97f-e557-499d-975d-9317e6f7cb25" Mar 3 13:41:58.536751 containerd[1977]: time="2026-03-03T13:41:58.536702674Z" level=error msg="Failed to destroy network for sandbox \"625331f7880a552d1f5c427e739061f8d3898fc3c4f784d733b434753624b76b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 
13:41:58.539407 containerd[1977]: time="2026-03-03T13:41:58.539108046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bcd9b5f-p2cx4,Uid:76acbdd4-dcb3-4968-9ce4-244304aed0dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"625331f7880a552d1f5c427e739061f8d3898fc3c4f784d733b434753624b76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.539664 kubelet[3342]: E0303 13:41:58.539547 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625331f7880a552d1f5c427e739061f8d3898fc3c4f784d733b434753624b76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.539664 kubelet[3342]: E0303 13:41:58.539628 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625331f7880a552d1f5c427e739061f8d3898fc3c4f784d733b434753624b76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77bcd9b5f-p2cx4" Mar 3 13:41:58.542018 kubelet[3342]: E0303 13:41:58.539653 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625331f7880a552d1f5c427e739061f8d3898fc3c4f784d733b434753624b76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-77bcd9b5f-p2cx4" Mar 3 13:41:58.542018 kubelet[3342]: E0303 13:41:58.539806 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77bcd9b5f-p2cx4_calico-system(76acbdd4-dcb3-4968-9ce4-244304aed0dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77bcd9b5f-p2cx4_calico-system(76acbdd4-dcb3-4968-9ce4-244304aed0dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"625331f7880a552d1f5c427e739061f8d3898fc3c4f784d733b434753624b76b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77bcd9b5f-p2cx4" podUID="76acbdd4-dcb3-4968-9ce4-244304aed0dc" Mar 3 13:41:58.547564 containerd[1977]: time="2026-03-03T13:41:58.547519162Z" level=error msg="Failed to destroy network for sandbox \"f59fd681ec915c27989fb395030c8a4a395e5c6ad6e6b8984388e1d1e47a7cb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.549425 containerd[1977]: time="2026-03-03T13:41:58.548987879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67bd66cb7f-r4xhf,Uid:f1461887-2a0b-476f-877e-64ddc68ab5a8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59fd681ec915c27989fb395030c8a4a395e5c6ad6e6b8984388e1d1e47a7cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.550176 kubelet[3342]: E0303 13:41:58.549845 3342 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59fd681ec915c27989fb395030c8a4a395e5c6ad6e6b8984388e1d1e47a7cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:58.550176 kubelet[3342]: E0303 13:41:58.549896 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59fd681ec915c27989fb395030c8a4a395e5c6ad6e6b8984388e1d1e47a7cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67bd66cb7f-r4xhf" Mar 3 13:41:58.550176 kubelet[3342]: E0303 13:41:58.549918 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59fd681ec915c27989fb395030c8a4a395e5c6ad6e6b8984388e1d1e47a7cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67bd66cb7f-r4xhf" Mar 3 13:41:58.550368 kubelet[3342]: E0303 13:41:58.549970 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67bd66cb7f-r4xhf_calico-system(f1461887-2a0b-476f-877e-64ddc68ab5a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67bd66cb7f-r4xhf_calico-system(f1461887-2a0b-476f-877e-64ddc68ab5a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f59fd681ec915c27989fb395030c8a4a395e5c6ad6e6b8984388e1d1e47a7cb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-67bd66cb7f-r4xhf" podUID="f1461887-2a0b-476f-877e-64ddc68ab5a8" Mar 3 13:41:58.937316 systemd[1]: Created slice kubepods-besteffort-pod4bd43a2b_aace_4d10_bd7f_1c90a5c76f8e.slice - libcontainer container kubepods-besteffort-pod4bd43a2b_aace_4d10_bd7f_1c90a5c76f8e.slice. Mar 3 13:41:58.942043 containerd[1977]: time="2026-03-03T13:41:58.942002517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vzcdk,Uid:4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e,Namespace:calico-system,Attempt:0,}" Mar 3 13:41:59.002853 containerd[1977]: time="2026-03-03T13:41:59.002804079Z" level=error msg="Failed to destroy network for sandbox \"2d38dcb63898370e4ddee5d998acba537bdade6f327b4c6356ebeb0daa6b2ab1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:59.006500 containerd[1977]: time="2026-03-03T13:41:59.004295292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vzcdk,Uid:4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d38dcb63898370e4ddee5d998acba537bdade6f327b4c6356ebeb0daa6b2ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:59.006979 kubelet[3342]: E0303 13:41:59.006866 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d38dcb63898370e4ddee5d998acba537bdade6f327b4c6356ebeb0daa6b2ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 13:41:59.006979 kubelet[3342]: 
E0303 13:41:59.006948 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d38dcb63898370e4ddee5d998acba537bdade6f327b4c6356ebeb0daa6b2ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vzcdk" Mar 3 13:41:59.007134 kubelet[3342]: E0303 13:41:59.006980 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d38dcb63898370e4ddee5d998acba537bdade6f327b4c6356ebeb0daa6b2ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vzcdk" Mar 3 13:41:59.007305 kubelet[3342]: E0303 13:41:59.007252 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vzcdk_calico-system(4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vzcdk_calico-system(4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d38dcb63898370e4ddee5d998acba537bdade6f327b4c6356ebeb0daa6b2ab1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vzcdk" podUID="4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e" Mar 3 13:41:59.163660 kubelet[3342]: I0303 13:41:59.163284 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lzbc5" podStartSLOduration=4.144701964 podStartE2EDuration="25.163264313s" 
podCreationTimestamp="2026-03-03 13:41:34 +0000 UTC" firstStartedPulling="2026-03-03 13:41:35.240053153 +0000 UTC m=+22.470526687" lastFinishedPulling="2026-03-03 13:41:56.258615503 +0000 UTC m=+43.489089036" observedRunningTime="2026-03-03 13:41:59.162147021 +0000 UTC m=+46.392620598" watchObservedRunningTime="2026-03-03 13:41:59.163264313 +0000 UTC m=+46.393737867" Mar 3 13:41:59.374026 systemd[1]: run-netns-cni\x2d6b6a5eeb\x2d042b\x2d6510\x2d7016\x2d666e8ee9b7ad.mount: Deactivated successfully. Mar 3 13:41:59.374126 systemd[1]: run-netns-cni\x2df38450aa\x2d18cb\x2da56f\x2d9214\x2d6f80dde2d9af.mount: Deactivated successfully. Mar 3 13:41:59.374182 systemd[1]: run-netns-cni\x2d27da20a8\x2def7d\x2d6189\x2df2d4\x2d6793c77034cc.mount: Deactivated successfully. Mar 3 13:42:00.073705 kubelet[3342]: I0303 13:42:00.067982 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1461887-2a0b-476f-877e-64ddc68ab5a8-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "f1461887-2a0b-476f-877e-64ddc68ab5a8" (UID: "f1461887-2a0b-476f-877e-64ddc68ab5a8"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 13:42:00.075670 kubelet[3342]: I0303 13:42:00.073797 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f1461887-2a0b-476f-877e-64ddc68ab5a8-nginx-config\") pod \"f1461887-2a0b-476f-877e-64ddc68ab5a8\" (UID: \"f1461887-2a0b-476f-877e-64ddc68ab5a8\") " Mar 3 13:42:00.075670 kubelet[3342]: I0303 13:42:00.073867 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1461887-2a0b-476f-877e-64ddc68ab5a8-whisker-ca-bundle\") pod \"f1461887-2a0b-476f-877e-64ddc68ab5a8\" (UID: \"f1461887-2a0b-476f-877e-64ddc68ab5a8\") " Mar 3 13:42:00.075670 kubelet[3342]: I0303 13:42:00.073903 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts6qj\" (UniqueName: \"kubernetes.io/projected/f1461887-2a0b-476f-877e-64ddc68ab5a8-kube-api-access-ts6qj\") pod \"f1461887-2a0b-476f-877e-64ddc68ab5a8\" (UID: \"f1461887-2a0b-476f-877e-64ddc68ab5a8\") " Mar 3 13:42:00.075670 kubelet[3342]: I0303 13:42:00.073927 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1461887-2a0b-476f-877e-64ddc68ab5a8-whisker-backend-key-pair\") pod \"f1461887-2a0b-476f-877e-64ddc68ab5a8\" (UID: \"f1461887-2a0b-476f-877e-64ddc68ab5a8\") " Mar 3 13:42:00.075670 kubelet[3342]: I0303 13:42:00.074033 3342 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f1461887-2a0b-476f-877e-64ddc68ab5a8-nginx-config\") on node \"ip-172-31-29-215\" DevicePath \"\"" Mar 3 13:42:00.076397 kubelet[3342]: I0303 13:42:00.076292 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1461887-2a0b-476f-877e-64ddc68ab5a8-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "f1461887-2a0b-476f-877e-64ddc68ab5a8" (UID: "f1461887-2a0b-476f-877e-64ddc68ab5a8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 13:42:00.103438 systemd[1]: var-lib-kubelet-pods-f1461887\x2d2a0b\x2d476f\x2d877e\x2d64ddc68ab5a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dts6qj.mount: Deactivated successfully. Mar 3 13:42:00.107842 kubelet[3342]: I0303 13:42:00.107801 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1461887-2a0b-476f-877e-64ddc68ab5a8-kube-api-access-ts6qj" (OuterVolumeSpecName: "kube-api-access-ts6qj") pod "f1461887-2a0b-476f-877e-64ddc68ab5a8" (UID: "f1461887-2a0b-476f-877e-64ddc68ab5a8"). InnerVolumeSpecName "kube-api-access-ts6qj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 3 13:42:00.111976 kubelet[3342]: I0303 13:42:00.111926 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1461887-2a0b-476f-877e-64ddc68ab5a8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f1461887-2a0b-476f-877e-64ddc68ab5a8" (UID: "f1461887-2a0b-476f-877e-64ddc68ab5a8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 3 13:42:00.113253 systemd[1]: var-lib-kubelet-pods-f1461887\x2d2a0b\x2d476f\x2d877e\x2d64ddc68ab5a8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 3 13:42:00.150531 kubelet[3342]: I0303 13:42:00.149755 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:42:00.162103 systemd[1]: Removed slice kubepods-besteffort-podf1461887_2a0b_476f_877e_64ddc68ab5a8.slice - libcontainer container kubepods-besteffort-podf1461887_2a0b_476f_877e_64ddc68ab5a8.slice. 
Mar 3 13:42:00.174744 kubelet[3342]: I0303 13:42:00.174699 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1461887-2a0b-476f-877e-64ddc68ab5a8-whisker-ca-bundle\") on node \"ip-172-31-29-215\" DevicePath \"\"" Mar 3 13:42:00.175070 kubelet[3342]: I0303 13:42:00.174984 3342 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ts6qj\" (UniqueName: \"kubernetes.io/projected/f1461887-2a0b-476f-877e-64ddc68ab5a8-kube-api-access-ts6qj\") on node \"ip-172-31-29-215\" DevicePath \"\"" Mar 3 13:42:00.175070 kubelet[3342]: I0303 13:42:00.175003 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1461887-2a0b-476f-877e-64ddc68ab5a8-whisker-backend-key-pair\") on node \"ip-172-31-29-215\" DevicePath \"\"" Mar 3 13:42:00.282319 systemd[1]: Created slice kubepods-besteffort-podd55ce6fb_7c91_4711_8d65_02f9bf8cf40b.slice - libcontainer container kubepods-besteffort-podd55ce6fb_7c91_4711_8d65_02f9bf8cf40b.slice. 
Mar 3 13:42:00.382112 kubelet[3342]: I0303 13:42:00.381103 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d55ce6fb-7c91-4711-8d65-02f9bf8cf40b-whisker-backend-key-pair\") pod \"whisker-78f4dfbcf6-q847l\" (UID: \"d55ce6fb-7c91-4711-8d65-02f9bf8cf40b\") " pod="calico-system/whisker-78f4dfbcf6-q847l" Mar 3 13:42:00.382112 kubelet[3342]: I0303 13:42:00.381166 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d55ce6fb-7c91-4711-8d65-02f9bf8cf40b-whisker-ca-bundle\") pod \"whisker-78f4dfbcf6-q847l\" (UID: \"d55ce6fb-7c91-4711-8d65-02f9bf8cf40b\") " pod="calico-system/whisker-78f4dfbcf6-q847l" Mar 3 13:42:00.382112 kubelet[3342]: I0303 13:42:00.381209 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbhr2\" (UniqueName: \"kubernetes.io/projected/d55ce6fb-7c91-4711-8d65-02f9bf8cf40b-kube-api-access-pbhr2\") pod \"whisker-78f4dfbcf6-q847l\" (UID: \"d55ce6fb-7c91-4711-8d65-02f9bf8cf40b\") " pod="calico-system/whisker-78f4dfbcf6-q847l" Mar 3 13:42:00.382112 kubelet[3342]: I0303 13:42:00.381246 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d55ce6fb-7c91-4711-8d65-02f9bf8cf40b-nginx-config\") pod \"whisker-78f4dfbcf6-q847l\" (UID: \"d55ce6fb-7c91-4711-8d65-02f9bf8cf40b\") " pod="calico-system/whisker-78f4dfbcf6-q847l" Mar 3 13:42:00.597376 containerd[1977]: time="2026-03-03T13:42:00.596976671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78f4dfbcf6-q847l,Uid:d55ce6fb-7c91-4711-8d65-02f9bf8cf40b,Namespace:calico-system,Attempt:0,}" Mar 3 13:42:00.895902 systemd-networkd[1790]: cali138b8d9f46a: Link UP Mar 3 13:42:00.900446 systemd-networkd[1790]: 
cali138b8d9f46a: Gained carrier Mar 3 13:42:00.906901 (udev-worker)[4731]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.669 [ERROR][4672] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.737 [INFO][4672] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0 whisker-78f4dfbcf6- calico-system d55ce6fb-7c91-4711-8d65-02f9bf8cf40b 919 0 2026-03-03 13:42:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78f4dfbcf6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-215 whisker-78f4dfbcf6-q847l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali138b8d9f46a [] [] }} ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Namespace="calico-system" Pod="whisker-78f4dfbcf6-q847l" WorkloadEndpoint="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.737 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Namespace="calico-system" Pod="whisker-78f4dfbcf6-q847l" WorkloadEndpoint="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.804 [INFO][4695] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" HandleID="k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" 
Workload="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.818 [INFO][4695] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" HandleID="k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Workload="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000399ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-215", "pod":"whisker-78f4dfbcf6-q847l", "timestamp":"2026-03-03 13:42:00.804366806 +0000 UTC"}, Hostname:"ip-172-31-29-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000531080)} Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.818 [INFO][4695] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.818 [INFO][4695] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.818 [INFO][4695] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-215' Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.821 [INFO][4695] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.829 [INFO][4695] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.834 [INFO][4695] ipam/ipam.go 526: Trying affinity for 192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.837 [INFO][4695] ipam/ipam.go 160: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.839 [INFO][4695] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.839 [INFO][4695] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.841 [INFO][4695] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609 Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.847 [INFO][4695] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.854 [INFO][4695] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.127.129/26] block=192.168.127.128/26 
handle="k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.855 [INFO][4695] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.127.129/26] handle="k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" host="ip-172-31-29-215" Mar 3 13:42:00.925714 containerd[1977]: 2026-03-03 13:42:00.855 [INFO][4695] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:42:00.927438 containerd[1977]: 2026-03-03 13:42:00.855 [INFO][4695] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.127.129/26] IPv6=[] ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" HandleID="k8s-pod-network.7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Workload="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" Mar 3 13:42:00.927438 containerd[1977]: 2026-03-03 13:42:00.858 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Namespace="calico-system" Pod="whisker-78f4dfbcf6-q847l" WorkloadEndpoint="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0", GenerateName:"whisker-78f4dfbcf6-", Namespace:"calico-system", SelfLink:"", UID:"d55ce6fb-7c91-4711-8d65-02f9bf8cf40b", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78f4dfbcf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"", Pod:"whisker-78f4dfbcf6-q847l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali138b8d9f46a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:00.927438 containerd[1977]: 2026-03-03 13:42:00.858 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.129/32] ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Namespace="calico-system" Pod="whisker-78f4dfbcf6-q847l" WorkloadEndpoint="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" Mar 3 13:42:00.927438 containerd[1977]: 2026-03-03 13:42:00.858 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali138b8d9f46a ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Namespace="calico-system" Pod="whisker-78f4dfbcf6-q847l" WorkloadEndpoint="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" Mar 3 13:42:00.927438 containerd[1977]: 2026-03-03 13:42:00.900 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Namespace="calico-system" Pod="whisker-78f4dfbcf6-q847l" WorkloadEndpoint="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" Mar 3 13:42:00.927438 containerd[1977]: 2026-03-03 13:42:00.903 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" 
Namespace="calico-system" Pod="whisker-78f4dfbcf6-q847l" WorkloadEndpoint="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0", GenerateName:"whisker-78f4dfbcf6-", Namespace:"calico-system", SelfLink:"", UID:"d55ce6fb-7c91-4711-8d65-02f9bf8cf40b", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78f4dfbcf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609", Pod:"whisker-78f4dfbcf6-q847l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali138b8d9f46a", MAC:"a6:40:49:2d:41:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:00.928064 containerd[1977]: 2026-03-03 13:42:00.921 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" Namespace="calico-system" Pod="whisker-78f4dfbcf6-q847l" WorkloadEndpoint="ip--172--31--29--215-k8s-whisker--78f4dfbcf6--q847l-eth0" Mar 3 13:42:00.941179 
kubelet[3342]: I0303 13:42:00.941073 3342 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1461887-2a0b-476f-877e-64ddc68ab5a8" path="/var/lib/kubelet/pods/f1461887-2a0b-476f-877e-64ddc68ab5a8/volumes" Mar 3 13:42:01.057036 containerd[1977]: time="2026-03-03T13:42:01.056983014Z" level=info msg="connecting to shim 7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609" address="unix:///run/containerd/s/eae1ba95452ba451695b3b1b9ab07fb34fd0624484a52805ca16bfa46785aedd" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:01.088095 systemd[1]: Started cri-containerd-7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609.scope - libcontainer container 7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609. Mar 3 13:42:01.217871 containerd[1977]: time="2026-03-03T13:42:01.217719137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78f4dfbcf6-q847l,Uid:d55ce6fb-7c91-4711-8d65-02f9bf8cf40b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609\"" Mar 3 13:42:01.232518 containerd[1977]: time="2026-03-03T13:42:01.232451310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 3 13:42:01.555343 kubelet[3342]: I0303 13:42:01.553372 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:42:02.719057 systemd-networkd[1790]: cali138b8d9f46a: Gained IPv6LL Mar 3 13:42:03.513572 containerd[1977]: time="2026-03-03T13:42:03.513518034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:03.515460 containerd[1977]: time="2026-03-03T13:42:03.515418571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 3 13:42:03.556459 containerd[1977]: time="2026-03-03T13:42:03.555946041Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.323353438s" Mar 3 13:42:03.556459 containerd[1977]: time="2026-03-03T13:42:03.556005401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 3 13:42:03.569239 containerd[1977]: time="2026-03-03T13:42:03.567396915Z" level=info msg="CreateContainer within sandbox \"7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 3 13:42:03.576840 containerd[1977]: time="2026-03-03T13:42:03.576703723Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:03.578035 containerd[1977]: time="2026-03-03T13:42:03.577719029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:03.580006 containerd[1977]: time="2026-03-03T13:42:03.579969916Z" level=info msg="Container c685e8108744276c061abf896dfdf40c782924db96d7cfe618e0345de3b5ae1f: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:03.623537 containerd[1977]: time="2026-03-03T13:42:03.623478976Z" level=info msg="CreateContainer within sandbox \"7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c685e8108744276c061abf896dfdf40c782924db96d7cfe618e0345de3b5ae1f\"" Mar 3 13:42:03.627229 containerd[1977]: 
time="2026-03-03T13:42:03.627187063Z" level=info msg="StartContainer for \"c685e8108744276c061abf896dfdf40c782924db96d7cfe618e0345de3b5ae1f\"" Mar 3 13:42:03.629454 containerd[1977]: time="2026-03-03T13:42:03.629408747Z" level=info msg="connecting to shim c685e8108744276c061abf896dfdf40c782924db96d7cfe618e0345de3b5ae1f" address="unix:///run/containerd/s/eae1ba95452ba451695b3b1b9ab07fb34fd0624484a52805ca16bfa46785aedd" protocol=ttrpc version=3 Mar 3 13:42:03.697160 systemd[1]: Started cri-containerd-c685e8108744276c061abf896dfdf40c782924db96d7cfe618e0345de3b5ae1f.scope - libcontainer container c685e8108744276c061abf896dfdf40c782924db96d7cfe618e0345de3b5ae1f. Mar 3 13:42:03.916792 containerd[1977]: time="2026-03-03T13:42:03.915376165Z" level=info msg="StartContainer for \"c685e8108744276c061abf896dfdf40c782924db96d7cfe618e0345de3b5ae1f\" returns successfully" Mar 3 13:42:03.979041 containerd[1977]: time="2026-03-03T13:42:03.979004958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 3 13:42:05.250365 (udev-worker)[4990]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:42:05.255621 systemd-networkd[1790]: vxlan.calico: Link UP Mar 3 13:42:05.255883 systemd-networkd[1790]: vxlan.calico: Gained carrier Mar 3 13:42:05.414401 (udev-worker)[5001]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:42:06.668130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809433356.mount: Deactivated successfully. 
Mar 3 13:42:06.713936 containerd[1977]: time="2026-03-03T13:42:06.713865777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:06.714931 containerd[1977]: time="2026-03-03T13:42:06.714892867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 3 13:42:06.717793 containerd[1977]: time="2026-03-03T13:42:06.716571916Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:06.718877 containerd[1977]: time="2026-03-03T13:42:06.718844601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:06.719497 containerd[1977]: time="2026-03-03T13:42:06.719453960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.740406524s" Mar 3 13:42:06.719623 containerd[1977]: time="2026-03-03T13:42:06.719606148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 3 13:42:06.725752 containerd[1977]: time="2026-03-03T13:42:06.725711647Z" level=info msg="CreateContainer within sandbox \"7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 3 13:42:06.743037 
containerd[1977]: time="2026-03-03T13:42:06.742960367Z" level=info msg="Container 1c911259d1bbaebc68c7cbadccf80ae4334b8ac2b1f64ceb8c3d6e1d926b646c: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:06.754675 containerd[1977]: time="2026-03-03T13:42:06.754627513Z" level=info msg="CreateContainer within sandbox \"7e488278ae9e09b11c1258e183aae4cd29823581135562b39ddab1d4a21aa609\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1c911259d1bbaebc68c7cbadccf80ae4334b8ac2b1f64ceb8c3d6e1d926b646c\"" Mar 3 13:42:06.757035 containerd[1977]: time="2026-03-03T13:42:06.756994881Z" level=info msg="StartContainer for \"1c911259d1bbaebc68c7cbadccf80ae4334b8ac2b1f64ceb8c3d6e1d926b646c\"" Mar 3 13:42:06.758403 containerd[1977]: time="2026-03-03T13:42:06.758254950Z" level=info msg="connecting to shim 1c911259d1bbaebc68c7cbadccf80ae4334b8ac2b1f64ceb8c3d6e1d926b646c" address="unix:///run/containerd/s/eae1ba95452ba451695b3b1b9ab07fb34fd0624484a52805ca16bfa46785aedd" protocol=ttrpc version=3 Mar 3 13:42:06.831013 systemd[1]: Started cri-containerd-1c911259d1bbaebc68c7cbadccf80ae4334b8ac2b1f64ceb8c3d6e1d926b646c.scope - libcontainer container 1c911259d1bbaebc68c7cbadccf80ae4334b8ac2b1f64ceb8c3d6e1d926b646c. 
Mar 3 13:42:06.908266 containerd[1977]: time="2026-03-03T13:42:06.908227192Z" level=info msg="StartContainer for \"1c911259d1bbaebc68c7cbadccf80ae4334b8ac2b1f64ceb8c3d6e1d926b646c\" returns successfully" Mar 3 13:42:07.263600 systemd-networkd[1790]: vxlan.calico: Gained IPv6LL Mar 3 13:42:07.874893 kubelet[3342]: I0303 13:42:07.871804 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78f4dfbcf6-q847l" podStartSLOduration=2.377830502 podStartE2EDuration="7.867366822s" podCreationTimestamp="2026-03-03 13:42:00 +0000 UTC" firstStartedPulling="2026-03-03 13:42:01.230876859 +0000 UTC m=+48.461350394" lastFinishedPulling="2026-03-03 13:42:06.720413182 +0000 UTC m=+53.950886714" observedRunningTime="2026-03-03 13:42:07.856655812 +0000 UTC m=+55.087129365" watchObservedRunningTime="2026-03-03 13:42:07.867366822 +0000 UTC m=+55.097840368" Mar 3 13:42:09.585507 ntpd[2231]: Listen normally on 6 vxlan.calico 192.168.127.128:123 Mar 3 13:42:09.585566 ntpd[2231]: Listen normally on 7 cali138b8d9f46a [fe80::ecee:eeff:feee:eeee%4]:123 Mar 3 13:42:09.587716 ntpd[2231]: 3 Mar 13:42:09 ntpd[2231]: Listen normally on 6 vxlan.calico 192.168.127.128:123 Mar 3 13:42:09.587716 ntpd[2231]: 3 Mar 13:42:09 ntpd[2231]: Listen normally on 7 cali138b8d9f46a [fe80::ecee:eeff:feee:eeee%4]:123 Mar 3 13:42:09.587716 ntpd[2231]: 3 Mar 13:42:09 ntpd[2231]: Listen normally on 8 vxlan.calico [fe80::64e9:f6ff:fe56:f94b%5]:123 Mar 3 13:42:09.585590 ntpd[2231]: Listen normally on 8 vxlan.calico [fe80::64e9:f6ff:fe56:f94b%5]:123 Mar 3 13:42:09.937225 containerd[1977]: time="2026-03-03T13:42:09.937121110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f877f9cd-7w9vm,Uid:fd08abe4-ac09-48f6-bb93-09e07187cce3,Namespace:calico-system,Attempt:0,}" Mar 3 13:42:09.975755 systemd[1]: Started sshd@7-172.31.29.215:22-68.220.241.50:58816.service - OpenSSH per-connection server daemon (68.220.241.50:58816). 
Mar 3 13:42:10.485035 systemd-networkd[1790]: cali0476ad8fbc8: Link UP Mar 3 13:42:10.486539 systemd-networkd[1790]: cali0476ad8fbc8: Gained carrier Mar 3 13:42:10.488628 (udev-worker)[5140]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:42:10.504876 sshd[5108]: Accepted publickey for core from 68.220.241.50 port 58816 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:10.508847 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:10.521926 systemd-logind[1964]: New session 8 of user core. Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.190 [INFO][5106] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0 calico-apiserver-57f877f9cd- calico-system fd08abe4-ac09-48f6-bb93-09e07187cce3 866 0 2026-03-03 13:41:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57f877f9cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-215 calico-apiserver-57f877f9cd-7w9vm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0476ad8fbc8 [] [] }} ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-7w9vm" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.192 [INFO][5106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-7w9vm" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" Mar 3 13:42:10.524078 
containerd[1977]: 2026-03-03 13:42:10.415 [INFO][5121] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" HandleID="k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Workload="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.426 [INFO][5121] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" HandleID="k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Workload="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e220), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-215", "pod":"calico-apiserver-57f877f9cd-7w9vm", "timestamp":"2026-03-03 13:42:10.415335652 +0000 UTC"}, Hostname:"ip-172-31-29-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ee6e0)} Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.426 [INFO][5121] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.426 [INFO][5121] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.426 [INFO][5121] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-215' Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.431 [INFO][5121] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.439 [INFO][5121] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.454 [INFO][5121] ipam/ipam.go 526: Trying affinity for 192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.459 [INFO][5121] ipam/ipam.go 160: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.462 [INFO][5121] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.462 [INFO][5121] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.464 [INFO][5121] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.469 [INFO][5121] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.478 [INFO][5121] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.127.130/26] block=192.168.127.128/26 
handle="k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.478 [INFO][5121] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.127.130/26] handle="k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" host="ip-172-31-29-215" Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.478 [INFO][5121] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:42:10.524078 containerd[1977]: 2026-03-03 13:42:10.478 [INFO][5121] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.127.130/26] IPv6=[] ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" HandleID="k8s-pod-network.ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Workload="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" Mar 3 13:42:10.527211 containerd[1977]: 2026-03-03 13:42:10.481 [INFO][5106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-7w9vm" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0", GenerateName:"calico-apiserver-57f877f9cd-", Namespace:"calico-system", SelfLink:"", UID:"fd08abe4-ac09-48f6-bb93-09e07187cce3", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f877f9cd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"", Pod:"calico-apiserver-57f877f9cd-7w9vm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0476ad8fbc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:10.527211 containerd[1977]: 2026-03-03 13:42:10.481 [INFO][5106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.130/32] ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-7w9vm" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" Mar 3 13:42:10.527211 containerd[1977]: 2026-03-03 13:42:10.481 [INFO][5106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0476ad8fbc8 ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-7w9vm" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" Mar 3 13:42:10.527211 containerd[1977]: 2026-03-03 13:42:10.487 [INFO][5106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-7w9vm" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" Mar 3 13:42:10.527211 containerd[1977]: 2026-03-03 13:42:10.489 [INFO][5106] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-7w9vm" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0", GenerateName:"calico-apiserver-57f877f9cd-", Namespace:"calico-system", SelfLink:"", UID:"fd08abe4-ac09-48f6-bb93-09e07187cce3", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f877f9cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c", Pod:"calico-apiserver-57f877f9cd-7w9vm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0476ad8fbc8", MAC:"82:26:80:bc:05:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:10.527211 containerd[1977]: 2026-03-03 13:42:10.512 [INFO][5106] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-7w9vm" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--7w9vm-eth0" Mar 3 13:42:10.527972 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 3 13:42:10.843667 containerd[1977]: time="2026-03-03T13:42:10.843612489Z" level=info msg="connecting to shim ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c" address="unix:///run/containerd/s/0776edd20640e1fb4edfe5447d17f2217334520c942568e02dbb5628485ec633" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:10.879254 systemd[1]: Started cri-containerd-ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c.scope - libcontainer container ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c. Mar 3 13:42:10.942386 containerd[1977]: time="2026-03-03T13:42:10.941359853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bcd9b5f-p2cx4,Uid:76acbdd4-dcb3-4968-9ce4-244304aed0dc,Namespace:calico-system,Attempt:0,}" Mar 3 13:42:11.048798 containerd[1977]: time="2026-03-03T13:42:11.048540890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f877f9cd-7w9vm,Uid:fd08abe4-ac09-48f6-bb93-09e07187cce3,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c\"" Mar 3 13:42:11.054175 containerd[1977]: time="2026-03-03T13:42:11.054102829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 13:42:11.304667 systemd-networkd[1790]: cali336a37c330f: Link UP Mar 3 13:42:11.311090 systemd-networkd[1790]: cali336a37c330f: Gained carrier Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.108 [INFO][5209] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0 calico-kube-controllers-77bcd9b5f- calico-system 76acbdd4-dcb3-4968-9ce4-244304aed0dc 874 0 2026-03-03 13:41:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77bcd9b5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-215 calico-kube-controllers-77bcd9b5f-p2cx4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali336a37c330f [] [] }} ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Namespace="calico-system" Pod="calico-kube-controllers-77bcd9b5f-p2cx4" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.108 [INFO][5209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Namespace="calico-system" Pod="calico-kube-controllers-77bcd9b5f-p2cx4" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.221 [INFO][5237] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" HandleID="k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Workload="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.239 [INFO][5237] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" HandleID="k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" 
Workload="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004feb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-215", "pod":"calico-kube-controllers-77bcd9b5f-p2cx4", "timestamp":"2026-03-03 13:42:11.221958083 +0000 UTC"}, Hostname:"ip-172-31-29-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f9080)} Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.241 [INFO][5237] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.241 [INFO][5237] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.241 [INFO][5237] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-215' Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.245 [INFO][5237] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" host="ip-172-31-29-215" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.252 [INFO][5237] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-215" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.258 [INFO][5237] ipam/ipam.go 526: Trying affinity for 192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.263 [INFO][5237] ipam/ipam.go 160: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.268 [INFO][5237] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:11.341089 
containerd[1977]: 2026-03-03 13:42:11.268 [INFO][5237] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" host="ip-172-31-29-215" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.271 [INFO][5237] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.282 [INFO][5237] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" host="ip-172-31-29-215" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.293 [INFO][5237] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.127.131/26] block=192.168.127.128/26 handle="k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" host="ip-172-31-29-215" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.294 [INFO][5237] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.127.131/26] handle="k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" host="ip-172-31-29-215" Mar 3 13:42:11.341089 containerd[1977]: 2026-03-03 13:42:11.294 [INFO][5237] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 13:42:11.343395 containerd[1977]: 2026-03-03 13:42:11.294 [INFO][5237] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.127.131/26] IPv6=[] ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" HandleID="k8s-pod-network.d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Workload="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" Mar 3 13:42:11.343395 containerd[1977]: 2026-03-03 13:42:11.299 [INFO][5209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Namespace="calico-system" Pod="calico-kube-controllers-77bcd9b5f-p2cx4" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0", GenerateName:"calico-kube-controllers-77bcd9b5f-", Namespace:"calico-system", SelfLink:"", UID:"76acbdd4-dcb3-4968-9ce4-244304aed0dc", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bcd9b5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"", Pod:"calico-kube-controllers-77bcd9b5f-p2cx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali336a37c330f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:11.343395 containerd[1977]: 2026-03-03 13:42:11.299 [INFO][5209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.131/32] ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Namespace="calico-system" Pod="calico-kube-controllers-77bcd9b5f-p2cx4" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" Mar 3 13:42:11.343395 containerd[1977]: 2026-03-03 13:42:11.299 [INFO][5209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali336a37c330f ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Namespace="calico-system" Pod="calico-kube-controllers-77bcd9b5f-p2cx4" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" Mar 3 13:42:11.343395 containerd[1977]: 2026-03-03 13:42:11.310 [INFO][5209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Namespace="calico-system" Pod="calico-kube-controllers-77bcd9b5f-p2cx4" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" Mar 3 13:42:11.344555 containerd[1977]: 2026-03-03 13:42:11.311 [INFO][5209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Namespace="calico-system" Pod="calico-kube-controllers-77bcd9b5f-p2cx4" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0", GenerateName:"calico-kube-controllers-77bcd9b5f-", Namespace:"calico-system", SelfLink:"", UID:"76acbdd4-dcb3-4968-9ce4-244304aed0dc", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bcd9b5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc", Pod:"calico-kube-controllers-77bcd9b5f-p2cx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali336a37c330f", MAC:"1a:76:62:af:c2:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:11.344555 containerd[1977]: 2026-03-03 13:42:11.334 [INFO][5209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" Namespace="calico-system" Pod="calico-kube-controllers-77bcd9b5f-p2cx4" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--kube--controllers--77bcd9b5f--p2cx4-eth0" Mar 3 13:42:11.411538 containerd[1977]: time="2026-03-03T13:42:11.409993578Z" 
level=info msg="connecting to shim d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc" address="unix:///run/containerd/s/73efff562abad8d7006d9204b7e94d9a91176b5b10c2bbbca3aef4a01a82a5d1" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:11.474003 systemd[1]: Started cri-containerd-d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc.scope - libcontainer container d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc. Mar 3 13:42:11.580729 containerd[1977]: time="2026-03-03T13:42:11.580689022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bcd9b5f-p2cx4,Uid:76acbdd4-dcb3-4968-9ce4-244304aed0dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc\"" Mar 3 13:42:11.899358 sshd[5144]: Connection closed by 68.220.241.50 port 58816 Mar 3 13:42:11.900128 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:11.907268 systemd-logind[1964]: Session 8 logged out. Waiting for processes to exit. Mar 3 13:42:11.907368 systemd[1]: sshd@7-172.31.29.215:22-68.220.241.50:58816.service: Deactivated successfully. Mar 3 13:42:11.909761 systemd[1]: session-8.scope: Deactivated successfully. Mar 3 13:42:11.911753 systemd-logind[1964]: Removed session 8. 
Mar 3 13:42:11.933724 containerd[1977]: time="2026-03-03T13:42:11.933684096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-scfjs,Uid:04b4b97f-e557-499d-975d-9317e6f7cb25,Namespace:kube-system,Attempt:0,}" Mar 3 13:42:12.070733 systemd-networkd[1790]: calic509dc5e9e4: Link UP Mar 3 13:42:12.071339 systemd-networkd[1790]: calic509dc5e9e4: Gained carrier Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:11.973 [INFO][5319] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0 coredns-66bc5c9577- kube-system 04b4b97f-e557-499d-975d-9317e6f7cb25 864 0 2026-03-03 13:41:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-215 coredns-66bc5c9577-scfjs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic509dc5e9e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Namespace="kube-system" Pod="coredns-66bc5c9577-scfjs" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:11.973 [INFO][5319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Namespace="kube-system" Pod="coredns-66bc5c9577-scfjs" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.011 [INFO][5331] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" 
HandleID="k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Workload="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.026 [INFO][5331] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" HandleID="k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Workload="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-215", "pod":"coredns-66bc5c9577-scfjs", "timestamp":"2026-03-03 13:42:12.011723377 +0000 UTC"}, Hostname:"ip-172-31-29-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002df080)} Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.026 [INFO][5331] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.026 [INFO][5331] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.026 [INFO][5331] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-215' Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.030 [INFO][5331] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.037 [INFO][5331] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.042 [INFO][5331] ipam/ipam.go 526: Trying affinity for 192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.044 [INFO][5331] ipam/ipam.go 160: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.046 [INFO][5331] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.046 [INFO][5331] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.049 [INFO][5331] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.055 [INFO][5331] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.064 [INFO][5331] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.127.132/26] block=192.168.127.128/26 
handle="k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.064 [INFO][5331] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.127.132/26] handle="k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" host="ip-172-31-29-215" Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.064 [INFO][5331] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:42:12.095890 containerd[1977]: 2026-03-03 13:42:12.064 [INFO][5331] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.127.132/26] IPv6=[] ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" HandleID="k8s-pod-network.47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Workload="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" Mar 3 13:42:12.097941 containerd[1977]: 2026-03-03 13:42:12.067 [INFO][5319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Namespace="kube-system" Pod="coredns-66bc5c9577-scfjs" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"04b4b97f-e557-499d-975d-9317e6f7cb25", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"", Pod:"coredns-66bc5c9577-scfjs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic509dc5e9e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:12.097941 containerd[1977]: 2026-03-03 13:42:12.067 [INFO][5319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.132/32] ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Namespace="kube-system" Pod="coredns-66bc5c9577-scfjs" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" Mar 3 13:42:12.097941 containerd[1977]: 2026-03-03 13:42:12.067 [INFO][5319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic509dc5e9e4 ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Namespace="kube-system" Pod="coredns-66bc5c9577-scfjs" 
WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" Mar 3 13:42:12.097941 containerd[1977]: 2026-03-03 13:42:12.072 [INFO][5319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Namespace="kube-system" Pod="coredns-66bc5c9577-scfjs" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" Mar 3 13:42:12.098113 containerd[1977]: 2026-03-03 13:42:12.074 [INFO][5319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Namespace="kube-system" Pod="coredns-66bc5c9577-scfjs" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"04b4b97f-e557-499d-975d-9317e6f7cb25", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b", Pod:"coredns-66bc5c9577-scfjs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic509dc5e9e4", MAC:"22:22:a0:d2:ff:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:12.098113 containerd[1977]: 2026-03-03 13:42:12.092 [INFO][5319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" Namespace="kube-system" Pod="coredns-66bc5c9577-scfjs" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--scfjs-eth0" Mar 3 13:42:12.128382 systemd-networkd[1790]: cali0476ad8fbc8: Gained IPv6LL Mar 3 13:42:12.145421 containerd[1977]: time="2026-03-03T13:42:12.145359511Z" level=info msg="connecting to shim 47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b" address="unix:///run/containerd/s/88c9311a06238d98318e3dc783021cd10f7cf85839e133fe6bd93274df88f0f4" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:12.195076 systemd[1]: Started cri-containerd-47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b.scope - libcontainer container 47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b. 
Mar 3 13:42:12.280338 containerd[1977]: time="2026-03-03T13:42:12.280301269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-scfjs,Uid:04b4b97f-e557-499d-975d-9317e6f7cb25,Namespace:kube-system,Attempt:0,} returns sandbox id \"47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b\"" Mar 3 13:42:12.287404 containerd[1977]: time="2026-03-03T13:42:12.287372079Z" level=info msg="CreateContainer within sandbox \"47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 13:42:12.354609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143092243.mount: Deactivated successfully. Mar 3 13:42:12.355670 containerd[1977]: time="2026-03-03T13:42:12.355115728Z" level=info msg="Container cde485a9c63f4fd9611a18775b5eaf8aee9f6102f3ba0ac15ff4d127344784f4: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:12.368806 containerd[1977]: time="2026-03-03T13:42:12.368720096Z" level=info msg="CreateContainer within sandbox \"47af628fcdd1b98236b8bcebb8e3c11eee2ac1d719ae07260e6717da7404ca9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cde485a9c63f4fd9611a18775b5eaf8aee9f6102f3ba0ac15ff4d127344784f4\"" Mar 3 13:42:12.369862 containerd[1977]: time="2026-03-03T13:42:12.369361351Z" level=info msg="StartContainer for \"cde485a9c63f4fd9611a18775b5eaf8aee9f6102f3ba0ac15ff4d127344784f4\"" Mar 3 13:42:12.370454 containerd[1977]: time="2026-03-03T13:42:12.370428882Z" level=info msg="connecting to shim cde485a9c63f4fd9611a18775b5eaf8aee9f6102f3ba0ac15ff4d127344784f4" address="unix:///run/containerd/s/88c9311a06238d98318e3dc783021cd10f7cf85839e133fe6bd93274df88f0f4" protocol=ttrpc version=3 Mar 3 13:42:12.389981 systemd[1]: Started cri-containerd-cde485a9c63f4fd9611a18775b5eaf8aee9f6102f3ba0ac15ff4d127344784f4.scope - libcontainer container cde485a9c63f4fd9611a18775b5eaf8aee9f6102f3ba0ac15ff4d127344784f4. 
Mar 3 13:42:12.434738 containerd[1977]: time="2026-03-03T13:42:12.434626500Z" level=info msg="StartContainer for \"cde485a9c63f4fd9611a18775b5eaf8aee9f6102f3ba0ac15ff4d127344784f4\" returns successfully" Mar 3 13:42:12.446948 systemd-networkd[1790]: cali336a37c330f: Gained IPv6LL Mar 3 13:42:12.954793 containerd[1977]: time="2026-03-03T13:42:12.954716991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4h8sk,Uid:c0efcb7b-93dc-4d18-9b21-e4718494e8da,Namespace:kube-system,Attempt:0,}" Mar 3 13:42:12.956312 containerd[1977]: time="2026-03-03T13:42:12.955993511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f877f9cd-frzvl,Uid:f1a68e16-2cc0-44c9-8e62-da67fc36763c,Namespace:calico-system,Attempt:0,}" Mar 3 13:42:12.961661 containerd[1977]: time="2026-03-03T13:42:12.961620717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-vnbqd,Uid:67ac86c0-8603-47c0-a321-050ba93c4a04,Namespace:calico-system,Attempt:0,}" Mar 3 13:42:13.233414 systemd-networkd[1790]: cali1b1b0fb64bd: Link UP Mar 3 13:42:13.235225 systemd-networkd[1790]: cali1b1b0fb64bd: Gained carrier Mar 3 13:42:13.254653 kubelet[3342]: I0303 13:42:13.254120 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-scfjs" podStartSLOduration=54.254100993 podStartE2EDuration="54.254100993s" podCreationTimestamp="2026-03-03 13:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:42:12.896375283 +0000 UTC m=+60.126848835" watchObservedRunningTime="2026-03-03 13:42:13.254100993 +0000 UTC m=+60.484574558" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.067 [INFO][5445] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0 calico-apiserver-57f877f9cd- 
calico-system f1a68e16-2cc0-44c9-8e62-da67fc36763c 876 0 2026-03-03 13:41:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57f877f9cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-215 calico-apiserver-57f877f9cd-frzvl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1b1b0fb64bd [] [] }} ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-frzvl" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.068 [INFO][5445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-frzvl" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.120 [INFO][5487] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" HandleID="k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Workload="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.159 [INFO][5487] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" HandleID="k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Workload="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde70), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ip-172-31-29-215", "pod":"calico-apiserver-57f877f9cd-frzvl", "timestamp":"2026-03-03 13:42:13.120339886 +0000 UTC"}, Hostname:"ip-172-31-29-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.159 [INFO][5487] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.159 [INFO][5487] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.159 [INFO][5487] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-215' Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.169 [INFO][5487] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.178 [INFO][5487] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.194 [INFO][5487] ipam/ipam.go 526: Trying affinity for 192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.197 [INFO][5487] ipam/ipam.go 160: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.200 [INFO][5487] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.201 [INFO][5487] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.127.128/26 
handle="k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.205 [INFO][5487] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.215 [INFO][5487] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.225 [INFO][5487] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.127.133/26] block=192.168.127.128/26 handle="k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.225 [INFO][5487] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.127.133/26] handle="k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" host="ip-172-31-29-215" Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.225 [INFO][5487] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 13:42:13.257355 containerd[1977]: 2026-03-03 13:42:13.226 [INFO][5487] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.127.133/26] IPv6=[] ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" HandleID="k8s-pod-network.e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Workload="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" Mar 3 13:42:13.258186 containerd[1977]: 2026-03-03 13:42:13.229 [INFO][5445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-frzvl" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0", GenerateName:"calico-apiserver-57f877f9cd-", Namespace:"calico-system", SelfLink:"", UID:"f1a68e16-2cc0-44c9-8e62-da67fc36763c", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f877f9cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"", Pod:"calico-apiserver-57f877f9cd-frzvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1b1b0fb64bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:13.258186 containerd[1977]: 2026-03-03 13:42:13.229 [INFO][5445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.133/32] ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-frzvl" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" Mar 3 13:42:13.258186 containerd[1977]: 2026-03-03 13:42:13.229 [INFO][5445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b1b0fb64bd ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-frzvl" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" Mar 3 13:42:13.258186 containerd[1977]: 2026-03-03 13:42:13.236 [INFO][5445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-frzvl" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" Mar 3 13:42:13.258186 containerd[1977]: 2026-03-03 13:42:13.236 [INFO][5445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-frzvl" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0", GenerateName:"calico-apiserver-57f877f9cd-", Namespace:"calico-system", SelfLink:"", UID:"f1a68e16-2cc0-44c9-8e62-da67fc36763c", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f877f9cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c", Pod:"calico-apiserver-57f877f9cd-frzvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1b1b0fb64bd", MAC:"ba:5a:14:f2:6e:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:13.258186 containerd[1977]: 2026-03-03 13:42:13.251 [INFO][5445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" Namespace="calico-system" Pod="calico-apiserver-57f877f9cd-frzvl" WorkloadEndpoint="ip--172--31--29--215-k8s-calico--apiserver--57f877f9cd--frzvl-eth0" Mar 3 13:42:13.310457 containerd[1977]: time="2026-03-03T13:42:13.310331208Z" level=info msg="connecting to shim 
e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c" address="unix:///run/containerd/s/a0c75a16d3abcf2ea45aef6bf8c96b5d5c0a85d406448f39079db0001c8026c9" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:13.337118 systemd-networkd[1790]: cali7b723097279: Link UP Mar 3 13:42:13.337975 systemd-networkd[1790]: cali7b723097279: Gained carrier Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.080 [INFO][5466] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0 goldmane-cccfbd5cf- calico-system 67ac86c0-8603-47c0-a321-050ba93c4a04 870 0 2026-03-03 13:41:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-215 goldmane-cccfbd5cf-vnbqd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7b723097279 [] [] }} ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Namespace="calico-system" Pod="goldmane-cccfbd5cf-vnbqd" WorkloadEndpoint="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.080 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Namespace="calico-system" Pod="goldmane-cccfbd5cf-vnbqd" WorkloadEndpoint="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.136 [INFO][5495] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" HandleID="k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Workload="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" 
Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.171 [INFO][5495] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" HandleID="k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Workload="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-215", "pod":"goldmane-cccfbd5cf-vnbqd", "timestamp":"2026-03-03 13:42:13.136471136 +0000 UTC"}, Hostname:"ip-172-31-29-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c1600)} Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.172 [INFO][5495] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.226 [INFO][5495] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.227 [INFO][5495] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-215' Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.269 [INFO][5495] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.278 [INFO][5495] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.286 [INFO][5495] ipam/ipam.go 526: Trying affinity for 192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.289 [INFO][5495] ipam/ipam.go 160: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.292 [INFO][5495] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.292 [INFO][5495] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.299 [INFO][5495] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2 Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.307 [INFO][5495] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.317 [INFO][5495] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.127.134/26] block=192.168.127.128/26 
handle="k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.317 [INFO][5495] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.127.134/26] handle="k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" host="ip-172-31-29-215" Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.317 [INFO][5495] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:42:13.366172 containerd[1977]: 2026-03-03 13:42:13.317 [INFO][5495] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.127.134/26] IPv6=[] ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" HandleID="k8s-pod-network.940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Workload="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" Mar 3 13:42:13.367382 containerd[1977]: 2026-03-03 13:42:13.324 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Namespace="calico-system" Pod="goldmane-cccfbd5cf-vnbqd" WorkloadEndpoint="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"67ac86c0-8603-47c0-a321-050ba93c4a04", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"", Pod:"goldmane-cccfbd5cf-vnbqd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b723097279", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:13.367382 containerd[1977]: 2026-03-03 13:42:13.324 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.134/32] ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Namespace="calico-system" Pod="goldmane-cccfbd5cf-vnbqd" WorkloadEndpoint="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" Mar 3 13:42:13.367382 containerd[1977]: 2026-03-03 13:42:13.324 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b723097279 ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Namespace="calico-system" Pod="goldmane-cccfbd5cf-vnbqd" WorkloadEndpoint="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" Mar 3 13:42:13.367382 containerd[1977]: 2026-03-03 13:42:13.338 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Namespace="calico-system" Pod="goldmane-cccfbd5cf-vnbqd" WorkloadEndpoint="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" Mar 3 13:42:13.367382 containerd[1977]: 2026-03-03 13:42:13.339 [INFO][5466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-vnbqd" WorkloadEndpoint="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"67ac86c0-8603-47c0-a321-050ba93c4a04", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2", Pod:"goldmane-cccfbd5cf-vnbqd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b723097279", MAC:"26:d9:48:fa:1a:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:13.367382 containerd[1977]: 2026-03-03 13:42:13.357 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" Namespace="calico-system" Pod="goldmane-cccfbd5cf-vnbqd" WorkloadEndpoint="ip--172--31--29--215-k8s-goldmane--cccfbd5cf--vnbqd-eth0" Mar 3 13:42:13.392008 
systemd[1]: Started cri-containerd-e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c.scope - libcontainer container e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c. Mar 3 13:42:13.408018 systemd-networkd[1790]: calic509dc5e9e4: Gained IPv6LL Mar 3 13:42:13.460094 systemd-networkd[1790]: calic9742ae8068: Link UP Mar 3 13:42:13.465202 systemd-networkd[1790]: calic9742ae8068: Gained carrier Mar 3 13:42:13.482673 containerd[1977]: time="2026-03-03T13:42:13.481844983Z" level=info msg="connecting to shim 940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2" address="unix:///run/containerd/s/4809d8ff0f5907b507ba13d9d547f9a8a6d9783c2f33edb1445658c624cf176f" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.066 [INFO][5441] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0 coredns-66bc5c9577- kube-system c0efcb7b-93dc-4d18-9b21-e4718494e8da 863 0 2026-03-03 13:41:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-215 coredns-66bc5c9577-4h8sk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9742ae8068 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Namespace="kube-system" Pod="coredns-66bc5c9577-4h8sk" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.066 [INFO][5441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Namespace="kube-system" Pod="coredns-66bc5c9577-4h8sk" 
WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.183 [INFO][5485] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" HandleID="k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Workload="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.197 [INFO][5485] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" HandleID="k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Workload="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004feb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-215", "pod":"coredns-66bc5c9577-4h8sk", "timestamp":"2026-03-03 13:42:13.183013396 +0000 UTC"}, Hostname:"ip-172-31-29-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.197 [INFO][5485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.318 [INFO][5485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.318 [INFO][5485] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-215' Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.370 [INFO][5485] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.377 [INFO][5485] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.396 [INFO][5485] ipam/ipam.go 526: Trying affinity for 192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.399 [INFO][5485] ipam/ipam.go 160: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.403 [INFO][5485] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.403 [INFO][5485] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.406 [INFO][5485] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83 Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.415 [INFO][5485] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.435 [INFO][5485] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.127.135/26] block=192.168.127.128/26 
handle="k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.436 [INFO][5485] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.127.135/26] handle="k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" host="ip-172-31-29-215" Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.437 [INFO][5485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:42:13.522815 containerd[1977]: 2026-03-03 13:42:13.437 [INFO][5485] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.127.135/26] IPv6=[] ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" HandleID="k8s-pod-network.2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Workload="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" Mar 3 13:42:13.525473 containerd[1977]: 2026-03-03 13:42:13.453 [INFO][5441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Namespace="kube-system" Pod="coredns-66bc5c9577-4h8sk" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c0efcb7b-93dc-4d18-9b21-e4718494e8da", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"", Pod:"coredns-66bc5c9577-4h8sk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9742ae8068", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:13.525473 containerd[1977]: 2026-03-03 13:42:13.453 [INFO][5441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.135/32] ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Namespace="kube-system" Pod="coredns-66bc5c9577-4h8sk" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" Mar 3 13:42:13.525473 containerd[1977]: 2026-03-03 13:42:13.453 [INFO][5441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9742ae8068 ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Namespace="kube-system" Pod="coredns-66bc5c9577-4h8sk" 
WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" Mar 3 13:42:13.525473 containerd[1977]: 2026-03-03 13:42:13.468 [INFO][5441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Namespace="kube-system" Pod="coredns-66bc5c9577-4h8sk" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" Mar 3 13:42:13.525719 containerd[1977]: 2026-03-03 13:42:13.475 [INFO][5441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Namespace="kube-system" Pod="coredns-66bc5c9577-4h8sk" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c0efcb7b-93dc-4d18-9b21-e4718494e8da", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83", Pod:"coredns-66bc5c9577-4h8sk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9742ae8068", MAC:"4a:f4:d0:8f:f2:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:13.525719 containerd[1977]: 2026-03-03 13:42:13.508 [INFO][5441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" Namespace="kube-system" Pod="coredns-66bc5c9577-4h8sk" WorkloadEndpoint="ip--172--31--29--215-k8s-coredns--66bc5c9577--4h8sk-eth0" Mar 3 13:42:13.585593 systemd[1]: Started cri-containerd-940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2.scope - libcontainer container 940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2. 
Mar 3 13:42:13.669734 containerd[1977]: time="2026-03-03T13:42:13.669550560Z" level=info msg="connecting to shim 2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83" address="unix:///run/containerd/s/752472cbb10fcad8ac4a1c730e00039288fa707f846838230a39ea1031dde069" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:13.700355 containerd[1977]: time="2026-03-03T13:42:13.700303103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f877f9cd-frzvl,Uid:f1a68e16-2cc0-44c9-8e62-da67fc36763c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c\"" Mar 3 13:42:13.737016 systemd[1]: Started cri-containerd-2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83.scope - libcontainer container 2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83. Mar 3 13:42:13.839858 containerd[1977]: time="2026-03-03T13:42:13.839804519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4h8sk,Uid:c0efcb7b-93dc-4d18-9b21-e4718494e8da,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83\"" Mar 3 13:42:13.857253 containerd[1977]: time="2026-03-03T13:42:13.857048393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-vnbqd,Uid:67ac86c0-8603-47c0-a321-050ba93c4a04,Namespace:calico-system,Attempt:0,} returns sandbox id \"940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2\"" Mar 3 13:42:13.862324 containerd[1977]: time="2026-03-03T13:42:13.862005593Z" level=info msg="CreateContainer within sandbox \"2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 13:42:13.897713 containerd[1977]: time="2026-03-03T13:42:13.896674925Z" level=info msg="Container 3c414335aaadd10ab5279a718ce5b05ea914a5e67ca419c54ef20bf3c168196c: CDI devices from CRI Config.CDIDevices: []" 
Mar 3 13:42:13.907116 containerd[1977]: time="2026-03-03T13:42:13.906976969Z" level=info msg="CreateContainer within sandbox \"2cb9f68443e6cbd73fecde1569c606bcb2288f687804fe862c9013599b2fbe83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c414335aaadd10ab5279a718ce5b05ea914a5e67ca419c54ef20bf3c168196c\"" Mar 3 13:42:13.908610 containerd[1977]: time="2026-03-03T13:42:13.908409547Z" level=info msg="StartContainer for \"3c414335aaadd10ab5279a718ce5b05ea914a5e67ca419c54ef20bf3c168196c\"" Mar 3 13:42:13.913432 containerd[1977]: time="2026-03-03T13:42:13.913232024Z" level=info msg="connecting to shim 3c414335aaadd10ab5279a718ce5b05ea914a5e67ca419c54ef20bf3c168196c" address="unix:///run/containerd/s/752472cbb10fcad8ac4a1c730e00039288fa707f846838230a39ea1031dde069" protocol=ttrpc version=3 Mar 3 13:42:13.934704 containerd[1977]: time="2026-03-03T13:42:13.934661593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vzcdk,Uid:4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e,Namespace:calico-system,Attempt:0,}" Mar 3 13:42:13.947390 systemd[1]: Started cri-containerd-3c414335aaadd10ab5279a718ce5b05ea914a5e67ca419c54ef20bf3c168196c.scope - libcontainer container 3c414335aaadd10ab5279a718ce5b05ea914a5e67ca419c54ef20bf3c168196c. 
Mar 3 13:42:14.034919 containerd[1977]: time="2026-03-03T13:42:14.034880302Z" level=info msg="StartContainer for \"3c414335aaadd10ab5279a718ce5b05ea914a5e67ca419c54ef20bf3c168196c\" returns successfully" Mar 3 13:42:14.249159 systemd-networkd[1790]: calicb45805ac7a: Link UP Mar 3 13:42:14.250456 systemd-networkd[1790]: calicb45805ac7a: Gained carrier Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.071 [INFO][5705] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0 csi-node-driver- calico-system 4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e 721 0 2026-03-03 13:41:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-215 csi-node-driver-vzcdk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicb45805ac7a [] [] }} ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Namespace="calico-system" Pod="csi-node-driver-vzcdk" WorkloadEndpoint="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.073 [INFO][5705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Namespace="calico-system" Pod="csi-node-driver-vzcdk" WorkloadEndpoint="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.168 [INFO][5739] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" HandleID="k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" 
Workload="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.184 [INFO][5739] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" HandleID="k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Workload="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-215", "pod":"csi-node-driver-vzcdk", "timestamp":"2026-03-03 13:42:14.168508985 +0000 UTC"}, Hostname:"ip-172-31-29-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fe420)} Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.185 [INFO][5739] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.185 [INFO][5739] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.185 [INFO][5739] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-215' Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.190 [INFO][5739] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.198 [INFO][5739] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.205 [INFO][5739] ipam/ipam.go 526: Trying affinity for 192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.209 [INFO][5739] ipam/ipam.go 160: Attempting to load block cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.212 [INFO][5739] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.212 [INFO][5739] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.214 [INFO][5739] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45 Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.220 [INFO][5739] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.234 [INFO][5739] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.127.136/26] block=192.168.127.128/26 
handle="k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.234 [INFO][5739] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.127.136/26] handle="k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" host="ip-172-31-29-215" Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.234 [INFO][5739] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:42:14.282891 containerd[1977]: 2026-03-03 13:42:14.234 [INFO][5739] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.127.136/26] IPv6=[] ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" HandleID="k8s-pod-network.19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Workload="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" Mar 3 13:42:14.284266 containerd[1977]: 2026-03-03 13:42:14.240 [INFO][5705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Namespace="calico-system" Pod="csi-node-driver-vzcdk" WorkloadEndpoint="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"", Pod:"csi-node-driver-vzcdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb45805ac7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:14.284266 containerd[1977]: 2026-03-03 13:42:14.240 [INFO][5705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.136/32] ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Namespace="calico-system" Pod="csi-node-driver-vzcdk" WorkloadEndpoint="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" Mar 3 13:42:14.284266 containerd[1977]: 2026-03-03 13:42:14.240 [INFO][5705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb45805ac7a ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Namespace="calico-system" Pod="csi-node-driver-vzcdk" WorkloadEndpoint="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" Mar 3 13:42:14.284266 containerd[1977]: 2026-03-03 13:42:14.251 [INFO][5705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Namespace="calico-system" Pod="csi-node-driver-vzcdk" WorkloadEndpoint="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" Mar 3 13:42:14.284266 containerd[1977]: 2026-03-03 13:42:14.251 [INFO][5705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Namespace="calico-system" Pod="csi-node-driver-vzcdk" WorkloadEndpoint="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 41, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-215", ContainerID:"19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45", Pod:"csi-node-driver-vzcdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb45805ac7a", MAC:"ca:59:5e:47:b5:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:42:14.284266 containerd[1977]: 2026-03-03 13:42:14.273 [INFO][5705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" Namespace="calico-system" Pod="csi-node-driver-vzcdk" WorkloadEndpoint="ip--172--31--29--215-k8s-csi--node--driver--vzcdk-eth0" Mar 3 13:42:14.334797 containerd[1977]: time="2026-03-03T13:42:14.332066290Z" level=info msg="connecting to shim 19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45" address="unix:///run/containerd/s/40e8e927f2968488850be68049f2c0407040f23ee115b362e8382999cf672ea9" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:14.401335 systemd[1]: Started cri-containerd-19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45.scope - libcontainer container 19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45. Mar 3 13:42:14.494179 containerd[1977]: time="2026-03-03T13:42:14.494130015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vzcdk,Uid:4bd43a2b-aace-4d10-bd7f-1c90a5c76f8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45\"" Mar 3 13:42:14.917172 kubelet[3342]: I0303 13:42:14.917025 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4h8sk" podStartSLOduration=55.917000728 podStartE2EDuration="55.917000728s" podCreationTimestamp="2026-03-03 13:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:42:14.912846065 +0000 UTC m=+62.143319619" watchObservedRunningTime="2026-03-03 13:42:14.917000728 +0000 UTC m=+62.147474282" Mar 3 13:42:15.075402 systemd-networkd[1790]: cali1b1b0fb64bd: Gained IPv6LL Mar 3 13:42:15.075760 systemd-networkd[1790]: cali7b723097279: Gained IPv6LL Mar 3 13:42:15.327564 systemd-networkd[1790]: calic9742ae8068: Gained IPv6LL Mar 3 13:42:15.675205 containerd[1977]: time="2026-03-03T13:42:15.675066717Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 3 13:42:15.738244 containerd[1977]: time="2026-03-03T13:42:15.738185168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:15.741293 containerd[1977]: time="2026-03-03T13:42:15.741130993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.686975068s" Mar 3 13:42:15.741293 containerd[1977]: time="2026-03-03T13:42:15.741178382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 3 13:42:15.744761 containerd[1977]: time="2026-03-03T13:42:15.744708697Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:15.745443 containerd[1977]: time="2026-03-03T13:42:15.745399859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:15.748856 containerd[1977]: time="2026-03-03T13:42:15.748815633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 3 13:42:15.767066 containerd[1977]: time="2026-03-03T13:42:15.767023443Z" level=info msg="CreateContainer within sandbox \"ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 3 13:42:15.782826 
containerd[1977]: time="2026-03-03T13:42:15.782098957Z" level=info msg="Container fe70afcc7af59c1ff352cae21f1cfb5c03e36397527efc4916be74c463ee0e4d: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:15.796663 containerd[1977]: time="2026-03-03T13:42:15.796622584Z" level=info msg="CreateContainer within sandbox \"ed536ac1832d20d6db7ab6566cd6418cd6dc37d87b55778b726cdc936fd1d05c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe70afcc7af59c1ff352cae21f1cfb5c03e36397527efc4916be74c463ee0e4d\"" Mar 3 13:42:15.797630 containerd[1977]: time="2026-03-03T13:42:15.797462804Z" level=info msg="StartContainer for \"fe70afcc7af59c1ff352cae21f1cfb5c03e36397527efc4916be74c463ee0e4d\"" Mar 3 13:42:15.799322 containerd[1977]: time="2026-03-03T13:42:15.799233639Z" level=info msg="connecting to shim fe70afcc7af59c1ff352cae21f1cfb5c03e36397527efc4916be74c463ee0e4d" address="unix:///run/containerd/s/0776edd20640e1fb4edfe5447d17f2217334520c942568e02dbb5628485ec633" protocol=ttrpc version=3 Mar 3 13:42:15.860009 systemd[1]: Started cri-containerd-fe70afcc7af59c1ff352cae21f1cfb5c03e36397527efc4916be74c463ee0e4d.scope - libcontainer container fe70afcc7af59c1ff352cae21f1cfb5c03e36397527efc4916be74c463ee0e4d. Mar 3 13:42:15.946458 containerd[1977]: time="2026-03-03T13:42:15.946268375Z" level=info msg="StartContainer for \"fe70afcc7af59c1ff352cae21f1cfb5c03e36397527efc4916be74c463ee0e4d\" returns successfully" Mar 3 13:42:15.967684 systemd-networkd[1790]: calicb45805ac7a: Gained IPv6LL Mar 3 13:42:16.981419 systemd[1]: Started sshd@8-172.31.29.215:22-68.220.241.50:38332.service - OpenSSH per-connection server daemon (68.220.241.50:38332). 
Mar 3 13:42:17.495517 sshd[5883]: Accepted publickey for core from 68.220.241.50 port 38332 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:17.498290 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:17.505301 systemd-logind[1964]: New session 9 of user core. Mar 3 13:42:17.509043 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 3 13:42:17.935014 kubelet[3342]: I0303 13:42:17.925494 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:42:18.331233 sshd[5889]: Connection closed by 68.220.241.50 port 38332 Mar 3 13:42:18.332968 sshd-session[5883]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:18.337316 systemd[1]: sshd@8-172.31.29.215:22-68.220.241.50:38332.service: Deactivated successfully. Mar 3 13:42:18.339755 systemd[1]: session-9.scope: Deactivated successfully. Mar 3 13:42:18.347026 systemd-logind[1964]: Session 9 logged out. Waiting for processes to exit. Mar 3 13:42:18.352025 systemd-logind[1964]: Removed session 9. 
Mar 3 13:42:18.585760 ntpd[2231]: Listen normally on 9 cali0476ad8fbc8 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 3 13:42:18.586697 ntpd[2231]: 3 Mar 13:42:18 ntpd[2231]: Listen normally on 9 cali0476ad8fbc8 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 3 13:42:18.586697 ntpd[2231]: 3 Mar 13:42:18 ntpd[2231]: Listen normally on 10 cali336a37c330f [fe80::ecee:eeff:feee:eeee%9]:123 Mar 3 13:42:18.586697 ntpd[2231]: 3 Mar 13:42:18 ntpd[2231]: Listen normally on 11 calic509dc5e9e4 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 3 13:42:18.586697 ntpd[2231]: 3 Mar 13:42:18 ntpd[2231]: Listen normally on 12 cali1b1b0fb64bd [fe80::ecee:eeff:feee:eeee%11]:123 Mar 3 13:42:18.586697 ntpd[2231]: 3 Mar 13:42:18 ntpd[2231]: Listen normally on 13 cali7b723097279 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 3 13:42:18.586697 ntpd[2231]: 3 Mar 13:42:18 ntpd[2231]: Listen normally on 14 calic9742ae8068 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 3 13:42:18.586697 ntpd[2231]: 3 Mar 13:42:18 ntpd[2231]: Listen normally on 15 calicb45805ac7a [fe80::ecee:eeff:feee:eeee%14]:123 Mar 3 13:42:18.585845 ntpd[2231]: Listen normally on 10 cali336a37c330f [fe80::ecee:eeff:feee:eeee%9]:123 Mar 3 13:42:18.585877 ntpd[2231]: Listen normally on 11 calic509dc5e9e4 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 3 13:42:18.585898 ntpd[2231]: Listen normally on 12 cali1b1b0fb64bd [fe80::ecee:eeff:feee:eeee%11]:123 Mar 3 13:42:18.585917 ntpd[2231]: Listen normally on 13 cali7b723097279 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 3 13:42:18.585936 ntpd[2231]: Listen normally on 14 calic9742ae8068 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 3 13:42:18.585961 ntpd[2231]: Listen normally on 15 calicb45805ac7a [fe80::ecee:eeff:feee:eeee%14]:123 Mar 3 13:42:21.561444 containerd[1977]: time="2026-03-03T13:42:21.561373595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:21.562522 containerd[1977]: time="2026-03-03T13:42:21.562176477Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 3 13:42:21.630962 containerd[1977]: time="2026-03-03T13:42:21.630899543Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:21.633574 containerd[1977]: time="2026-03-03T13:42:21.633477407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:21.634472 containerd[1977]: time="2026-03-03T13:42:21.634158143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.885297074s" Mar 3 13:42:21.634472 containerd[1977]: time="2026-03-03T13:42:21.634192948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 3 13:42:21.669261 containerd[1977]: time="2026-03-03T13:42:21.669221474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 13:42:21.874533 containerd[1977]: time="2026-03-03T13:42:21.873941708Z" level=info msg="CreateContainer within sandbox \"d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 3 13:42:21.884493 containerd[1977]: time="2026-03-03T13:42:21.884307735Z" level=info msg="Container a554edb43aeaee9916a1f78e5ea68cbe052e0b182b187583f979addb5635a4c6: CDI devices from 
CRI Config.CDIDevices: []" Mar 3 13:42:21.889907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458712260.mount: Deactivated successfully. Mar 3 13:42:21.994259 containerd[1977]: time="2026-03-03T13:42:21.994214920Z" level=info msg="CreateContainer within sandbox \"d034869d967696fd017aba60060f87b09617e7230c792cf24537445e4cfbffcc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a554edb43aeaee9916a1f78e5ea68cbe052e0b182b187583f979addb5635a4c6\"" Mar 3 13:42:21.995439 containerd[1977]: time="2026-03-03T13:42:21.995101087Z" level=info msg="StartContainer for \"a554edb43aeaee9916a1f78e5ea68cbe052e0b182b187583f979addb5635a4c6\"" Mar 3 13:42:22.015693 containerd[1977]: time="2026-03-03T13:42:22.015487500Z" level=info msg="connecting to shim a554edb43aeaee9916a1f78e5ea68cbe052e0b182b187583f979addb5635a4c6" address="unix:///run/containerd/s/73efff562abad8d7006d9204b7e94d9a91176b5b10c2bbbca3aef4a01a82a5d1" protocol=ttrpc version=3 Mar 3 13:42:22.212059 containerd[1977]: time="2026-03-03T13:42:22.211935404Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:22.215413 containerd[1977]: time="2026-03-03T13:42:22.215347626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 3 13:42:22.225074 containerd[1977]: time="2026-03-03T13:42:22.224172712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 554.906637ms" Mar 3 13:42:22.225074 containerd[1977]: time="2026-03-03T13:42:22.224222817Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 3 13:42:22.226871 containerd[1977]: time="2026-03-03T13:42:22.226642280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 3 13:42:22.235034 containerd[1977]: time="2026-03-03T13:42:22.234904822Z" level=info msg="CreateContainer within sandbox \"e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 3 13:42:22.253075 containerd[1977]: time="2026-03-03T13:42:22.252826549Z" level=info msg="Container 0981e5680f0e64c14cc73542ab473331eef1c2333249a4e6a05f2f5b1f309a0c: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:22.264274 systemd[1]: Started cri-containerd-a554edb43aeaee9916a1f78e5ea68cbe052e0b182b187583f979addb5635a4c6.scope - libcontainer container a554edb43aeaee9916a1f78e5ea68cbe052e0b182b187583f979addb5635a4c6. Mar 3 13:42:22.357596 containerd[1977]: time="2026-03-03T13:42:22.357449964Z" level=info msg="CreateContainer within sandbox \"e6282758cf9a3595afca207375b7909890083f31c98987c85ab84919c238e02c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0981e5680f0e64c14cc73542ab473331eef1c2333249a4e6a05f2f5b1f309a0c\"" Mar 3 13:42:22.360467 containerd[1977]: time="2026-03-03T13:42:22.360343217Z" level=info msg="StartContainer for \"0981e5680f0e64c14cc73542ab473331eef1c2333249a4e6a05f2f5b1f309a0c\"" Mar 3 13:42:22.363194 containerd[1977]: time="2026-03-03T13:42:22.363039962Z" level=info msg="connecting to shim 0981e5680f0e64c14cc73542ab473331eef1c2333249a4e6a05f2f5b1f309a0c" address="unix:///run/containerd/s/a0c75a16d3abcf2ea45aef6bf8c96b5d5c0a85d406448f39079db0001c8026c9" protocol=ttrpc version=3 Mar 3 13:42:22.409183 systemd[1]: Started cri-containerd-0981e5680f0e64c14cc73542ab473331eef1c2333249a4e6a05f2f5b1f309a0c.scope - libcontainer container 
0981e5680f0e64c14cc73542ab473331eef1c2333249a4e6a05f2f5b1f309a0c. Mar 3 13:42:22.427580 containerd[1977]: time="2026-03-03T13:42:22.427435692Z" level=info msg="StartContainer for \"a554edb43aeaee9916a1f78e5ea68cbe052e0b182b187583f979addb5635a4c6\" returns successfully" Mar 3 13:42:22.519509 containerd[1977]: time="2026-03-03T13:42:22.518636816Z" level=info msg="StartContainer for \"0981e5680f0e64c14cc73542ab473331eef1c2333249a4e6a05f2f5b1f309a0c\" returns successfully" Mar 3 13:42:23.274467 kubelet[3342]: I0303 13:42:23.268804 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77bcd9b5f-p2cx4" podStartSLOduration=39.146064425 podStartE2EDuration="49.229259704s" podCreationTimestamp="2026-03-03 13:41:34 +0000 UTC" firstStartedPulling="2026-03-03 13:42:11.585822006 +0000 UTC m=+58.816295553" lastFinishedPulling="2026-03-03 13:42:21.669017287 +0000 UTC m=+68.899490832" observedRunningTime="2026-03-03 13:42:23.104104874 +0000 UTC m=+70.334578425" watchObservedRunningTime="2026-03-03 13:42:23.229259704 +0000 UTC m=+70.459733272" Mar 3 13:42:23.275046 kubelet[3342]: I0303 13:42:23.274322 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-57f877f9cd-7w9vm" podStartSLOduration=44.579767446 podStartE2EDuration="49.274300156s" podCreationTimestamp="2026-03-03 13:41:34 +0000 UTC" firstStartedPulling="2026-03-03 13:42:11.051442116 +0000 UTC m=+58.281915648" lastFinishedPulling="2026-03-03 13:42:15.745974809 +0000 UTC m=+62.976448358" observedRunningTime="2026-03-03 13:42:16.938993606 +0000 UTC m=+64.169467162" watchObservedRunningTime="2026-03-03 13:42:23.274300156 +0000 UTC m=+70.504773710" Mar 3 13:42:23.409791 kubelet[3342]: I0303 13:42:23.409307 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-57f877f9cd-frzvl" podStartSLOduration=40.947185784 podStartE2EDuration="49.409279167s" 
podCreationTimestamp="2026-03-03 13:41:34 +0000 UTC" firstStartedPulling="2026-03-03 13:42:13.763830961 +0000 UTC m=+60.994304495" lastFinishedPulling="2026-03-03 13:42:22.225924332 +0000 UTC m=+69.456397878" observedRunningTime="2026-03-03 13:42:23.225545931 +0000 UTC m=+70.456019482" watchObservedRunningTime="2026-03-03 13:42:23.409279167 +0000 UTC m=+70.639752717" Mar 3 13:42:23.449166 systemd[1]: Started sshd@9-172.31.29.215:22-68.220.241.50:35212.service - OpenSSH per-connection server daemon (68.220.241.50:35212). Mar 3 13:42:24.094801 sshd[6016]: Accepted publickey for core from 68.220.241.50 port 35212 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:24.098144 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:24.111623 systemd-logind[1964]: New session 10 of user core. Mar 3 13:42:24.114390 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 3 13:42:24.191422 kubelet[3342]: I0303 13:42:24.191378 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:42:25.196127 sshd[6019]: Connection closed by 68.220.241.50 port 35212 Mar 3 13:42:25.197676 sshd-session[6016]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:25.209375 systemd[1]: sshd@9-172.31.29.215:22-68.220.241.50:35212.service: Deactivated successfully. Mar 3 13:42:25.214927 systemd[1]: session-10.scope: Deactivated successfully. Mar 3 13:42:25.222195 systemd-logind[1964]: Session 10 logged out. Waiting for processes to exit. Mar 3 13:42:25.226751 systemd-logind[1964]: Removed session 10. Mar 3 13:42:26.062525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532175115.mount: Deactivated successfully. 
Mar 3 13:42:26.966058 containerd[1977]: time="2026-03-03T13:42:26.945461310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:26.979132 containerd[1977]: time="2026-03-03T13:42:26.978249608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 3 13:42:27.015089 containerd[1977]: time="2026-03-03T13:42:27.015037050Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:27.020223 containerd[1977]: time="2026-03-03T13:42:27.020175823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:27.022232 containerd[1977]: time="2026-03-03T13:42:27.022006524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.795318499s" Mar 3 13:42:27.022232 containerd[1977]: time="2026-03-03T13:42:27.022051609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 3 13:42:27.028881 containerd[1977]: time="2026-03-03T13:42:27.027696081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 3 13:42:27.070979 containerd[1977]: time="2026-03-03T13:42:27.070766643Z" level=info msg="CreateContainer within sandbox \"940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2\" for 
container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 3 13:42:27.121760 containerd[1977]: time="2026-03-03T13:42:27.121692598Z" level=info msg="Container 6f55a97ab34139fba3fee9909d58fdd5c0a381a04cbfa4e6acf957c5af79a95b: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:27.258489 containerd[1977]: time="2026-03-03T13:42:27.258345887Z" level=info msg="CreateContainer within sandbox \"940b161cf31a4bfec127de782a849e1276fab3b4873ee98942dc51fcb23bc9f2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6f55a97ab34139fba3fee9909d58fdd5c0a381a04cbfa4e6acf957c5af79a95b\"" Mar 3 13:42:27.270625 containerd[1977]: time="2026-03-03T13:42:27.270566654Z" level=info msg="StartContainer for \"6f55a97ab34139fba3fee9909d58fdd5c0a381a04cbfa4e6acf957c5af79a95b\"" Mar 3 13:42:27.334203 containerd[1977]: time="2026-03-03T13:42:27.333458710Z" level=info msg="connecting to shim 6f55a97ab34139fba3fee9909d58fdd5c0a381a04cbfa4e6acf957c5af79a95b" address="unix:///run/containerd/s/4809d8ff0f5907b507ba13d9d547f9a8a6d9783c2f33edb1445658c624cf176f" protocol=ttrpc version=3 Mar 3 13:42:27.447068 systemd[1]: Started cri-containerd-6f55a97ab34139fba3fee9909d58fdd5c0a381a04cbfa4e6acf957c5af79a95b.scope - libcontainer container 6f55a97ab34139fba3fee9909d58fdd5c0a381a04cbfa4e6acf957c5af79a95b. 
Mar 3 13:42:27.668288 containerd[1977]: time="2026-03-03T13:42:27.668257509Z" level=info msg="StartContainer for \"6f55a97ab34139fba3fee9909d58fdd5c0a381a04cbfa4e6acf957c5af79a95b\" returns successfully" Mar 3 13:42:28.870349 containerd[1977]: time="2026-03-03T13:42:28.870300946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:28.871519 containerd[1977]: time="2026-03-03T13:42:28.871480546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 3 13:42:28.872981 containerd[1977]: time="2026-03-03T13:42:28.872929719Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:28.877983 containerd[1977]: time="2026-03-03T13:42:28.877927168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:28.878630 containerd[1977]: time="2026-03-03T13:42:28.878489236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.850760698s" Mar 3 13:42:28.878630 containerd[1977]: time="2026-03-03T13:42:28.878519610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 3 13:42:28.940662 containerd[1977]: time="2026-03-03T13:42:28.940624232Z" level=info msg="CreateContainer within sandbox 
\"19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 3 13:42:28.989622 containerd[1977]: time="2026-03-03T13:42:28.988810883Z" level=info msg="Container 65edeaa3b537311871779fe58a4077111de314d3418039e2edb8b3986418fd2b: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:28.998231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1789450920.mount: Deactivated successfully. Mar 3 13:42:29.015599 containerd[1977]: time="2026-03-03T13:42:29.015532966Z" level=info msg="CreateContainer within sandbox \"19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"65edeaa3b537311871779fe58a4077111de314d3418039e2edb8b3986418fd2b\"" Mar 3 13:42:29.016742 containerd[1977]: time="2026-03-03T13:42:29.016716312Z" level=info msg="StartContainer for \"65edeaa3b537311871779fe58a4077111de314d3418039e2edb8b3986418fd2b\"" Mar 3 13:42:29.018804 containerd[1977]: time="2026-03-03T13:42:29.018751397Z" level=info msg="connecting to shim 65edeaa3b537311871779fe58a4077111de314d3418039e2edb8b3986418fd2b" address="unix:///run/containerd/s/40e8e927f2968488850be68049f2c0407040f23ee115b362e8382999cf672ea9" protocol=ttrpc version=3 Mar 3 13:42:29.046977 systemd[1]: Started cri-containerd-65edeaa3b537311871779fe58a4077111de314d3418039e2edb8b3986418fd2b.scope - libcontainer container 65edeaa3b537311871779fe58a4077111de314d3418039e2edb8b3986418fd2b. 
Mar 3 13:42:29.138979 containerd[1977]: time="2026-03-03T13:42:29.138866400Z" level=info msg="StartContainer for \"65edeaa3b537311871779fe58a4077111de314d3418039e2edb8b3986418fd2b\" returns successfully" Mar 3 13:42:29.149065 containerd[1977]: time="2026-03-03T13:42:29.149026363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 3 13:42:30.283514 systemd[1]: Started sshd@10-172.31.29.215:22-68.220.241.50:35222.service - OpenSSH per-connection server daemon (68.220.241.50:35222). Mar 3 13:42:30.829868 sshd[6174]: Accepted publickey for core from 68.220.241.50 port 35222 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:30.835157 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:30.849331 systemd-logind[1964]: New session 11 of user core. Mar 3 13:42:30.853196 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 3 13:42:30.943205 kubelet[3342]: I0303 13:42:30.896003 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-vnbqd" podStartSLOduration=44.723932231 podStartE2EDuration="57.891335283s" podCreationTimestamp="2026-03-03 13:41:33 +0000 UTC" firstStartedPulling="2026-03-03 13:42:13.859669737 +0000 UTC m=+61.090143269" lastFinishedPulling="2026-03-03 13:42:27.027072789 +0000 UTC m=+74.257546321" observedRunningTime="2026-03-03 13:42:28.575695231 +0000 UTC m=+75.806168784" watchObservedRunningTime="2026-03-03 13:42:30.891335283 +0000 UTC m=+78.121808837" Mar 3 13:42:30.970969 containerd[1977]: time="2026-03-03T13:42:30.970908584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:30.972066 containerd[1977]: time="2026-03-03T13:42:30.971815290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes 
read=14704317" Mar 3 13:42:30.973657 containerd[1977]: time="2026-03-03T13:42:30.973587013Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:30.991500 containerd[1977]: time="2026-03-03T13:42:30.991440014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:42:30.996721 containerd[1977]: time="2026-03-03T13:42:30.995194842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.845725332s" Mar 3 13:42:30.996721 containerd[1977]: time="2026-03-03T13:42:30.995246211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 3 13:42:31.006583 containerd[1977]: time="2026-03-03T13:42:31.006522269Z" level=info msg="CreateContainer within sandbox \"19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 3 13:42:31.019049 containerd[1977]: time="2026-03-03T13:42:31.019009928Z" level=info msg="Container 9f0661c840991b288a37468f4838b439edae3e945a625c403c0185bca39fcc0c: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:31.027426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615141335.mount: Deactivated successfully. 
Mar 3 13:42:31.054918 containerd[1977]: time="2026-03-03T13:42:31.054752902Z" level=info msg="CreateContainer within sandbox \"19d51d46f35864cefd3be70afe6ab7bafded1bc7ab468ccd9289fbee97576f45\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9f0661c840991b288a37468f4838b439edae3e945a625c403c0185bca39fcc0c\"" Mar 3 13:42:31.056148 containerd[1977]: time="2026-03-03T13:42:31.056084125Z" level=info msg="StartContainer for \"9f0661c840991b288a37468f4838b439edae3e945a625c403c0185bca39fcc0c\"" Mar 3 13:42:31.059007 containerd[1977]: time="2026-03-03T13:42:31.058845531Z" level=info msg="connecting to shim 9f0661c840991b288a37468f4838b439edae3e945a625c403c0185bca39fcc0c" address="unix:///run/containerd/s/40e8e927f2968488850be68049f2c0407040f23ee115b362e8382999cf672ea9" protocol=ttrpc version=3 Mar 3 13:42:31.112230 systemd[1]: Started cri-containerd-9f0661c840991b288a37468f4838b439edae3e945a625c403c0185bca39fcc0c.scope - libcontainer container 9f0661c840991b288a37468f4838b439edae3e945a625c403c0185bca39fcc0c. 
Mar 3 13:42:31.257044 containerd[1977]: time="2026-03-03T13:42:31.256913398Z" level=info msg="StartContainer for \"9f0661c840991b288a37468f4838b439edae3e945a625c403c0185bca39fcc0c\" returns successfully" Mar 3 13:42:31.729969 kubelet[3342]: I0303 13:42:31.729838 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vzcdk" podStartSLOduration=41.228003755 podStartE2EDuration="57.729820159s" podCreationTimestamp="2026-03-03 13:41:34 +0000 UTC" firstStartedPulling="2026-03-03 13:42:14.496426425 +0000 UTC m=+61.726899960" lastFinishedPulling="2026-03-03 13:42:30.998242819 +0000 UTC m=+78.228716364" observedRunningTime="2026-03-03 13:42:31.729637466 +0000 UTC m=+78.960111017" watchObservedRunningTime="2026-03-03 13:42:31.729820159 +0000 UTC m=+78.960293709" Mar 3 13:42:31.805820 sshd[6203]: Connection closed by 68.220.241.50 port 35222 Mar 3 13:42:31.806070 sshd-session[6174]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:31.814558 systemd-logind[1964]: Session 11 logged out. Waiting for processes to exit. Mar 3 13:42:31.814978 systemd[1]: sshd@10-172.31.29.215:22-68.220.241.50:35222.service: Deactivated successfully. Mar 3 13:42:31.820191 systemd[1]: session-11.scope: Deactivated successfully. Mar 3 13:42:31.824212 systemd-logind[1964]: Removed session 11. Mar 3 13:42:31.904408 systemd[1]: Started sshd@11-172.31.29.215:22-68.220.241.50:35224.service - OpenSSH per-connection server daemon (68.220.241.50:35224). Mar 3 13:42:32.428368 sshd[6275]: Accepted publickey for core from 68.220.241.50 port 35224 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:32.430323 sshd-session[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:32.437347 systemd-logind[1964]: New session 12 of user core. 
Mar 3 13:42:32.441213 kubelet[3342]: I0303 13:42:32.433709 3342 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 3 13:42:32.442015 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 3 13:42:32.456793 kubelet[3342]: I0303 13:42:32.456598 3342 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 3 13:42:33.101416 sshd[6278]: Connection closed by 68.220.241.50 port 35224 Mar 3 13:42:33.103690 sshd-session[6275]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:33.111148 systemd[1]: sshd@11-172.31.29.215:22-68.220.241.50:35224.service: Deactivated successfully. Mar 3 13:42:33.122350 systemd[1]: session-12.scope: Deactivated successfully. Mar 3 13:42:33.136108 systemd-logind[1964]: Session 12 logged out. Waiting for processes to exit. Mar 3 13:42:33.140176 systemd-logind[1964]: Removed session 12. Mar 3 13:42:33.200626 systemd[1]: Started sshd@12-172.31.29.215:22-68.220.241.50:53082.service - OpenSSH per-connection server daemon (68.220.241.50:53082). Mar 3 13:42:33.709467 sshd[6288]: Accepted publickey for core from 68.220.241.50 port 53082 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:33.711443 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:33.718044 systemd-logind[1964]: New session 13 of user core. Mar 3 13:42:33.722006 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 3 13:42:34.152552 sshd[6291]: Connection closed by 68.220.241.50 port 53082 Mar 3 13:42:34.153443 sshd-session[6288]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:34.158399 systemd[1]: sshd@12-172.31.29.215:22-68.220.241.50:53082.service: Deactivated successfully. 
Mar 3 13:42:34.160611 systemd[1]: session-13.scope: Deactivated successfully. Mar 3 13:42:34.161506 systemd-logind[1964]: Session 13 logged out. Waiting for processes to exit. Mar 3 13:42:34.164101 systemd-logind[1964]: Removed session 13. Mar 3 13:42:39.246383 systemd[1]: Started sshd@13-172.31.29.215:22-68.220.241.50:53096.service - OpenSSH per-connection server daemon (68.220.241.50:53096). Mar 3 13:42:39.742638 sshd[6314]: Accepted publickey for core from 68.220.241.50 port 53096 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:39.744118 sshd-session[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:39.750256 systemd-logind[1964]: New session 14 of user core. Mar 3 13:42:39.754080 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 3 13:42:40.215903 sshd[6317]: Connection closed by 68.220.241.50 port 53096 Mar 3 13:42:40.219227 sshd-session[6314]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:40.228364 systemd[1]: sshd@13-172.31.29.215:22-68.220.241.50:53096.service: Deactivated successfully. Mar 3 13:42:40.230634 systemd[1]: session-14.scope: Deactivated successfully. Mar 3 13:42:40.232038 systemd-logind[1964]: Session 14 logged out. Waiting for processes to exit. Mar 3 13:42:40.234369 systemd-logind[1964]: Removed session 14. Mar 3 13:42:40.310245 systemd[1]: Started sshd@14-172.31.29.215:22-68.220.241.50:53102.service - OpenSSH per-connection server daemon (68.220.241.50:53102). Mar 3 13:42:40.779538 sshd[6329]: Accepted publickey for core from 68.220.241.50 port 53102 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:40.781619 sshd-session[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:40.790668 systemd-logind[1964]: New session 15 of user core. Mar 3 13:42:40.795024 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 3 13:42:41.520028 sshd[6332]: Connection closed by 68.220.241.50 port 53102 Mar 3 13:42:41.521097 sshd-session[6329]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:41.525926 systemd-logind[1964]: Session 15 logged out. Waiting for processes to exit. Mar 3 13:42:41.527112 systemd[1]: sshd@14-172.31.29.215:22-68.220.241.50:53102.service: Deactivated successfully. Mar 3 13:42:41.529619 systemd[1]: session-15.scope: Deactivated successfully. Mar 3 13:42:41.531169 systemd-logind[1964]: Removed session 15. Mar 3 13:42:41.606072 systemd[1]: Started sshd@15-172.31.29.215:22-68.220.241.50:53114.service - OpenSSH per-connection server daemon (68.220.241.50:53114). Mar 3 13:42:42.046449 sshd[6342]: Accepted publickey for core from 68.220.241.50 port 53114 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:42.049409 sshd-session[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:42.058172 systemd-logind[1964]: New session 16 of user core. Mar 3 13:42:42.064081 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 3 13:42:43.134230 sshd[6345]: Connection closed by 68.220.241.50 port 53114 Mar 3 13:42:43.171875 sshd-session[6342]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:43.178038 systemd-logind[1964]: Session 16 logged out. Waiting for processes to exit. Mar 3 13:42:43.179281 systemd[1]: sshd@15-172.31.29.215:22-68.220.241.50:53114.service: Deactivated successfully. Mar 3 13:42:43.182204 systemd[1]: session-16.scope: Deactivated successfully. Mar 3 13:42:43.184598 systemd-logind[1964]: Removed session 16. Mar 3 13:42:43.226851 systemd[1]: Started sshd@16-172.31.29.215:22-68.220.241.50:36244.service - OpenSSH per-connection server daemon (68.220.241.50:36244). 
Mar 3 13:42:43.722984 sshd[6372]: Accepted publickey for core from 68.220.241.50 port 36244 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:43.724756 sshd-session[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:43.730120 systemd-logind[1964]: New session 17 of user core. Mar 3 13:42:43.736032 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 3 13:42:44.462805 kubelet[3342]: I0303 13:42:44.460607 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:42:44.807809 sshd[6376]: Connection closed by 68.220.241.50 port 36244 Mar 3 13:42:44.808549 sshd-session[6372]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:44.824504 systemd[1]: sshd@16-172.31.29.215:22-68.220.241.50:36244.service: Deactivated successfully. Mar 3 13:42:44.829260 systemd[1]: session-17.scope: Deactivated successfully. Mar 3 13:42:44.832832 systemd-logind[1964]: Session 17 logged out. Waiting for processes to exit. Mar 3 13:42:44.836374 systemd-logind[1964]: Removed session 17. Mar 3 13:42:44.901196 systemd[1]: Started sshd@17-172.31.29.215:22-68.220.241.50:36258.service - OpenSSH per-connection server daemon (68.220.241.50:36258). Mar 3 13:42:45.368594 sshd[6388]: Accepted publickey for core from 68.220.241.50 port 36258 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:45.372164 sshd-session[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:45.380542 systemd-logind[1964]: New session 18 of user core. Mar 3 13:42:45.386019 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 3 13:42:45.769456 sshd[6393]: Connection closed by 68.220.241.50 port 36258 Mar 3 13:42:45.770999 sshd-session[6388]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:45.779841 systemd[1]: sshd@17-172.31.29.215:22-68.220.241.50:36258.service: Deactivated successfully. 
Mar 3 13:42:45.787439 systemd[1]: session-18.scope: Deactivated successfully. Mar 3 13:42:45.790077 systemd-logind[1964]: Session 18 logged out. Waiting for processes to exit. Mar 3 13:42:45.792890 systemd-logind[1964]: Removed session 18. Mar 3 13:42:50.869196 systemd[1]: Started sshd@18-172.31.29.215:22-68.220.241.50:36266.service - OpenSSH per-connection server daemon (68.220.241.50:36266). Mar 3 13:42:51.369872 sshd[6424]: Accepted publickey for core from 68.220.241.50 port 36266 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:51.371682 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:51.378668 systemd-logind[1964]: New session 19 of user core. Mar 3 13:42:51.386999 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 3 13:42:52.055427 sshd[6429]: Connection closed by 68.220.241.50 port 36266 Mar 3 13:42:52.056690 sshd-session[6424]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:52.066467 systemd-logind[1964]: Session 19 logged out. Waiting for processes to exit. Mar 3 13:42:52.066693 systemd[1]: sshd@18-172.31.29.215:22-68.220.241.50:36266.service: Deactivated successfully. Mar 3 13:42:52.069255 systemd[1]: session-19.scope: Deactivated successfully. Mar 3 13:42:52.071498 systemd-logind[1964]: Removed session 19. Mar 3 13:42:55.293205 kubelet[3342]: I0303 13:42:55.292796 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:42:57.137305 systemd[1]: Started sshd@19-172.31.29.215:22-68.220.241.50:40750.service - OpenSSH per-connection server daemon (68.220.241.50:40750). Mar 3 13:42:57.640552 sshd[6471]: Accepted publickey for core from 68.220.241.50 port 40750 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:42:57.642753 sshd-session[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:42:57.648365 systemd-logind[1964]: New session 20 of user core. 
Mar 3 13:42:57.653980 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 3 13:42:58.458028 sshd[6474]: Connection closed by 68.220.241.50 port 40750 Mar 3 13:42:58.459956 sshd-session[6471]: pam_unix(sshd:session): session closed for user core Mar 3 13:42:58.463974 systemd-logind[1964]: Session 20 logged out. Waiting for processes to exit. Mar 3 13:42:58.464757 systemd[1]: sshd@19-172.31.29.215:22-68.220.241.50:40750.service: Deactivated successfully. Mar 3 13:42:58.466823 systemd[1]: session-20.scope: Deactivated successfully. Mar 3 13:42:58.469172 systemd-logind[1964]: Removed session 20. Mar 3 13:43:03.545481 systemd[1]: Started sshd@20-172.31.29.215:22-68.220.241.50:41386.service - OpenSSH per-connection server daemon (68.220.241.50:41386). Mar 3 13:43:04.074478 sshd[6534]: Accepted publickey for core from 68.220.241.50 port 41386 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:43:04.075606 sshd-session[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:43:04.082443 systemd-logind[1964]: New session 21 of user core. Mar 3 13:43:04.091990 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 3 13:43:05.182229 sshd[6537]: Connection closed by 68.220.241.50 port 41386 Mar 3 13:43:05.184099 sshd-session[6534]: pam_unix(sshd:session): session closed for user core Mar 3 13:43:05.191128 systemd-logind[1964]: Session 21 logged out. Waiting for processes to exit. Mar 3 13:43:05.191618 systemd[1]: sshd@20-172.31.29.215:22-68.220.241.50:41386.service: Deactivated successfully. Mar 3 13:43:05.195725 systemd[1]: session-21.scope: Deactivated successfully. Mar 3 13:43:05.200917 systemd-logind[1964]: Removed session 21. Mar 3 13:43:10.270348 systemd[1]: Started sshd@21-172.31.29.215:22-68.220.241.50:41388.service - OpenSSH per-connection server daemon (68.220.241.50:41388). 
Mar 3 13:43:10.732640 sshd[6550]: Accepted publickey for core from 68.220.241.50 port 41388 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:43:10.735087 sshd-session[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:43:10.743577 systemd-logind[1964]: New session 22 of user core. Mar 3 13:43:10.746965 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 3 13:43:11.196849 sshd[6553]: Connection closed by 68.220.241.50 port 41388 Mar 3 13:43:11.198263 sshd-session[6550]: pam_unix(sshd:session): session closed for user core Mar 3 13:43:11.205064 systemd[1]: sshd@21-172.31.29.215:22-68.220.241.50:41388.service: Deactivated successfully. Mar 3 13:43:11.208357 systemd[1]: session-22.scope: Deactivated successfully. Mar 3 13:43:11.210295 systemd-logind[1964]: Session 22 logged out. Waiting for processes to exit. Mar 3 13:43:11.213212 systemd-logind[1964]: Removed session 22. Mar 3 13:43:16.284273 systemd[1]: Started sshd@22-172.31.29.215:22-68.220.241.50:36018.service - OpenSSH per-connection server daemon (68.220.241.50:36018). Mar 3 13:43:16.773823 sshd[6570]: Accepted publickey for core from 68.220.241.50 port 36018 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:43:16.775252 sshd-session[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:43:16.781886 systemd-logind[1964]: New session 23 of user core. Mar 3 13:43:16.784971 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 3 13:43:17.685682 sshd[6573]: Connection closed by 68.220.241.50 port 36018 Mar 3 13:43:17.689557 sshd-session[6570]: pam_unix(sshd:session): session closed for user core Mar 3 13:43:17.705111 systemd[1]: sshd@22-172.31.29.215:22-68.220.241.50:36018.service: Deactivated successfully. Mar 3 13:43:17.707701 systemd[1]: session-23.scope: Deactivated successfully. Mar 3 13:43:17.709634 systemd-logind[1964]: Session 23 logged out. 
Waiting for processes to exit. Mar 3 13:43:17.711529 systemd-logind[1964]: Removed session 23.