Mar 3 13:38:05.877151 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 10:59:45 -00 2026
Mar 3 13:38:05.877187 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:38:05.877206 kernel: BIOS-provided physical RAM map:
Mar 3 13:38:05.877218 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 3 13:38:05.877229 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 3 13:38:05.877241 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 3 13:38:05.877256 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 3 13:38:05.877269 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 3 13:38:05.877281 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 3 13:38:05.877293 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 3 13:38:05.877305 kernel: NX (Execute Disable) protection: active
Mar 3 13:38:05.877321 kernel: APIC: Static calls initialized
Mar 3 13:38:05.877333 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 3 13:38:05.877346 kernel: extended physical RAM map:
Mar 3 13:38:05.877361 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 3 13:38:05.877373 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Mar 3 13:38:05.877390 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Mar 3 13:38:05.877403 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Mar 3 13:38:05.877417 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 3 13:38:05.877431 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 3 13:38:05.877444 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 3 13:38:05.877458 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 3 13:38:05.877471 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 3 13:38:05.877485 kernel: efi: EFI v2.7 by EDK II
Mar 3 13:38:05.877498 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 3 13:38:05.877512 kernel: secureboot: Secure boot disabled
Mar 3 13:38:05.877525 kernel: SMBIOS 2.7 present.
Mar 3 13:38:05.877542 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 3 13:38:05.877556 kernel: DMI: Memory slots populated: 1/1
Mar 3 13:38:05.877569 kernel: Hypervisor detected: KVM
Mar 3 13:38:05.877583 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 3 13:38:05.877597 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 3 13:38:05.877610 kernel: kvm-clock: using sched offset of 6015850678 cycles
Mar 3 13:38:05.877625 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 3 13:38:05.877639 kernel: tsc: Detected 2499.998 MHz processor
Mar 3 13:38:05.877653 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 3 13:38:05.877667 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 3 13:38:05.877684 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 3 13:38:05.877698 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 3 13:38:05.877712 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 3 13:38:05.877731 kernel: Using GB pages for direct mapping
Mar 3 13:38:05.877747 kernel: ACPI: Early table checksum verification disabled
Mar 3 13:38:05.877761 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 3 13:38:05.877776 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 3 13:38:05.877794 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 3 13:38:05.877808 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 3 13:38:05.877823 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 3 13:38:05.877855 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 3 13:38:05.877871 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 3 13:38:05.877886 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 3 13:38:05.877901 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 3 13:38:05.877916 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 3 13:38:05.877934 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 3 13:38:05.877948 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 3 13:38:05.877963 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 3 13:38:05.877977 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 3 13:38:05.877992 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 3 13:38:05.878006 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 3 13:38:05.878021 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 3 13:38:05.878035 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 3 13:38:05.878053 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 3 13:38:05.878068 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 3 13:38:05.878082 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 3 13:38:05.878095 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 3 13:38:05.878109 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 3 13:38:05.878123 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 3 13:38:05.878137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 3 13:38:05.878150 kernel: NUMA: Initialized distance table, cnt=1
Mar 3 13:38:05.878162 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Mar 3 13:38:05.878183 kernel: Zone ranges:
Mar 3 13:38:05.878201 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 3 13:38:05.878221 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 3 13:38:05.878240 kernel: Normal empty
Mar 3 13:38:05.878259 kernel: Device empty
Mar 3 13:38:05.878273 kernel: Movable zone start for each node
Mar 3 13:38:05.878286 kernel: Early memory node ranges
Mar 3 13:38:05.878301 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 3 13:38:05.878316 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 3 13:38:05.878331 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 3 13:38:05.878348 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 3 13:38:05.878363 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:38:05.878378 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 3 13:38:05.878393 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 3 13:38:05.878407 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 3 13:38:05.878422 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 3 13:38:05.878436 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 3 13:38:05.878451 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 3 13:38:05.878465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 3 13:38:05.878483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 3 13:38:05.878497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 3 13:38:05.878512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 3 13:38:05.878527 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 3 13:38:05.878541 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 3 13:38:05.878556 kernel: TSC deadline timer available
Mar 3 13:38:05.878570 kernel: CPU topo: Max. logical packages: 1
Mar 3 13:38:05.878585 kernel: CPU topo: Max. logical dies: 1
Mar 3 13:38:05.878599 kernel: CPU topo: Max. dies per package: 1
Mar 3 13:38:05.878616 kernel: CPU topo: Max. threads per core: 2
Mar 3 13:38:05.878631 kernel: CPU topo: Num. cores per package: 1
Mar 3 13:38:05.878645 kernel: CPU topo: Num. threads per package: 2
Mar 3 13:38:05.878660 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 3 13:38:05.878674 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 3 13:38:05.878689 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 3 13:38:05.878703 kernel: Booting paravirtualized kernel on KVM
Mar 3 13:38:05.878718 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 3 13:38:05.878733 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 3 13:38:05.878748 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 3 13:38:05.878766 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 3 13:38:05.878783 kernel: pcpu-alloc: [0] 0 1
Mar 3 13:38:05.878797 kernel: kvm-guest: PV spinlocks enabled
Mar 3 13:38:05.878812 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 3 13:38:05.878829 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:38:05.878863 kernel: random: crng init done
Mar 3 13:38:05.878878 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 13:38:05.878896 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 3 13:38:05.878911 kernel: Fallback order for Node 0: 0
Mar 3 13:38:05.878926 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Mar 3 13:38:05.878941 kernel: Policy zone: DMA32
Mar 3 13:38:05.878967 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 13:38:05.878985 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 3 13:38:05.879001 kernel: Kernel/User page tables isolation: enabled
Mar 3 13:38:05.879016 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 3 13:38:05.879032 kernel: ftrace: allocated 157 pages with 5 groups
Mar 3 13:38:05.879047 kernel: Dynamic Preempt: voluntary
Mar 3 13:38:05.879063 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 13:38:05.879079 kernel: rcu: RCU event tracing is enabled.
Mar 3 13:38:05.879098 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 3 13:38:05.879114 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 13:38:05.879129 kernel: Rude variant of Tasks RCU enabled.
Mar 3 13:38:05.879145 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 13:38:05.879160 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 13:38:05.879176 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 3 13:38:05.879195 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 13:38:05.879210 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 13:38:05.879226 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 3 13:38:05.879242 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 3 13:38:05.879257 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 13:38:05.879273 kernel: Console: colour dummy device 80x25
Mar 3 13:38:05.879288 kernel: printk: legacy console [tty0] enabled
Mar 3 13:38:05.879304 kernel: printk: legacy console [ttyS0] enabled
Mar 3 13:38:05.879322 kernel: ACPI: Core revision 20240827
Mar 3 13:38:05.879338 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 3 13:38:05.879354 kernel: APIC: Switch to symmetric I/O mode setup
Mar 3 13:38:05.879369 kernel: x2apic enabled
Mar 3 13:38:05.879385 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 3 13:38:05.879401 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 3 13:38:05.879416 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 3 13:38:05.879432 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 3 13:38:05.879448 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 3 13:38:05.879463 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 3 13:38:05.879482 kernel: Spectre V2 : Mitigation: Retpolines
Mar 3 13:38:05.879497 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 3 13:38:05.879512 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 3 13:38:05.879528 kernel: RETBleed: Vulnerable
Mar 3 13:38:05.879544 kernel: Speculative Store Bypass: Vulnerable
Mar 3 13:38:05.879559 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:38:05.879575 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:38:05.879590 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 3 13:38:05.879606 kernel: active return thunk: its_return_thunk
Mar 3 13:38:05.879621 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 3 13:38:05.879636 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 3 13:38:05.879655 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 3 13:38:05.879671 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 3 13:38:05.879686 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 3 13:38:05.879701 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 3 13:38:05.879717 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 3 13:38:05.879732 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 3 13:38:05.879748 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 3 13:38:05.879763 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 3 13:38:05.879779 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 3 13:38:05.879794 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 3 13:38:05.879812 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 3 13:38:05.879828 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 3 13:38:05.879865 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 3 13:38:05.879880 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 3 13:38:05.879896 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 3 13:38:05.879911 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 3 13:38:05.879927 kernel: Freeing SMP alternatives memory: 32K
Mar 3 13:38:05.879943 kernel: pid_max: default: 32768 minimum: 301
Mar 3 13:38:05.879958 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 13:38:05.879974 kernel: landlock: Up and running.
Mar 3 13:38:05.879990 kernel: SELinux: Initializing.
Mar 3 13:38:05.880013 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 3 13:38:05.880033 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 3 13:38:05.880049 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 3 13:38:05.880065 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 3 13:38:05.880081 kernel: signal: max sigframe size: 3632
Mar 3 13:38:05.880096 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 13:38:05.880112 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 13:38:05.880128 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 13:38:05.880144 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 3 13:38:05.880159 kernel: smp: Bringing up secondary CPUs ...
Mar 3 13:38:05.880177 kernel: smpboot: x86: Booting SMP configuration:
Mar 3 13:38:05.880194 kernel: .... node #0, CPUs: #1
Mar 3 13:38:05.880210 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 3 13:38:05.880227 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 3 13:38:05.880242 kernel: smp: Brought up 1 node, 2 CPUs
Mar 3 13:38:05.880258 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 3 13:38:05.880274 kernel: Memory: 1899856K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 133384K reserved, 0K cma-reserved)
Mar 3 13:38:05.880290 kernel: devtmpfs: initialized
Mar 3 13:38:05.880305 kernel: x86/mm: Memory block size: 128MB
Mar 3 13:38:05.880324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 3 13:38:05.880340 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 13:38:05.880356 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 3 13:38:05.880372 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 13:38:05.880387 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 13:38:05.880403 kernel: audit: initializing netlink subsys (disabled)
Mar 3 13:38:05.880418 kernel: audit: type=2000 audit(1772545083.213:1): state=initialized audit_enabled=0 res=1
Mar 3 13:38:05.880433 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 13:38:05.880452 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 3 13:38:05.880468 kernel: cpuidle: using governor menu
Mar 3 13:38:05.880483 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 13:38:05.880498 kernel: dca service started, version 1.12.1
Mar 3 13:38:05.880514 kernel: PCI: Using configuration type 1 for base access
Mar 3 13:38:05.880530 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 3 13:38:05.880545 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 13:38:05.880561 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 13:38:05.880576 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 13:38:05.880595 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 13:38:05.880610 kernel: ACPI: Added _OSI(Module Device)
Mar 3 13:38:05.880626 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 13:38:05.880641 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 13:38:05.880657 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 3 13:38:05.880672 kernel: ACPI: Interpreter enabled
Mar 3 13:38:05.880687 kernel: ACPI: PM: (supports S0 S5)
Mar 3 13:38:05.880703 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 3 13:38:05.880719 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 3 13:38:05.880735 kernel: PCI: Using E820 reservations for host bridge windows
Mar 3 13:38:05.880753 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 3 13:38:05.880768 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 3 13:38:05.882035 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 13:38:05.882191 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 3 13:38:05.882326 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 3 13:38:05.882342 kernel: acpiphp: Slot [3] registered
Mar 3 13:38:05.882356 kernel: acpiphp: Slot [4] registered
Mar 3 13:38:05.882374 kernel: acpiphp: Slot [5] registered
Mar 3 13:38:05.882387 kernel: acpiphp: Slot [6] registered
Mar 3 13:38:05.882402 kernel: acpiphp: Slot [7] registered
Mar 3 13:38:05.882417 kernel: acpiphp: Slot [8] registered
Mar 3 13:38:05.882431 kernel: acpiphp: Slot [9] registered
Mar 3 13:38:05.882446 kernel: acpiphp: Slot [10] registered
Mar 3 13:38:05.882461 kernel: acpiphp: Slot [11] registered
Mar 3 13:38:05.882476 kernel: acpiphp: Slot [12] registered
Mar 3 13:38:05.882491 kernel: acpiphp: Slot [13] registered
Mar 3 13:38:05.882510 kernel: acpiphp: Slot [14] registered
Mar 3 13:38:05.882525 kernel: acpiphp: Slot [15] registered
Mar 3 13:38:05.882539 kernel: acpiphp: Slot [16] registered
Mar 3 13:38:05.882555 kernel: acpiphp: Slot [17] registered
Mar 3 13:38:05.882570 kernel: acpiphp: Slot [18] registered
Mar 3 13:38:05.882585 kernel: acpiphp: Slot [19] registered
Mar 3 13:38:05.882600 kernel: acpiphp: Slot [20] registered
Mar 3 13:38:05.882615 kernel: acpiphp: Slot [21] registered
Mar 3 13:38:05.882629 kernel: acpiphp: Slot [22] registered
Mar 3 13:38:05.882645 kernel: acpiphp: Slot [23] registered
Mar 3 13:38:05.882662 kernel: acpiphp: Slot [24] registered
Mar 3 13:38:05.882677 kernel: acpiphp: Slot [25] registered
Mar 3 13:38:05.882692 kernel: acpiphp: Slot [26] registered
Mar 3 13:38:05.882707 kernel: acpiphp: Slot [27] registered
Mar 3 13:38:05.882722 kernel: acpiphp: Slot [28] registered
Mar 3 13:38:05.882737 kernel: acpiphp: Slot [29] registered
Mar 3 13:38:05.882752 kernel: acpiphp: Slot [30] registered
Mar 3 13:38:05.882767 kernel: acpiphp: Slot [31] registered
Mar 3 13:38:05.882782 kernel: PCI host bridge to bus 0000:00
Mar 3 13:38:05.884017 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 3 13:38:05.884163 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 3 13:38:05.884288 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 3 13:38:05.884406 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 3 13:38:05.884521 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 3 13:38:05.884634 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 3 13:38:05.884787 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Mar 3 13:38:05.884950 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Mar 3 13:38:05.885111 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Mar 3 13:38:05.885242 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 3 13:38:05.885370 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 3 13:38:05.885497 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 3 13:38:05.885623 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 3 13:38:05.885754 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 3 13:38:05.888954 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 3 13:38:05.889103 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 3 13:38:05.889245 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 3 13:38:05.889381 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Mar 3 13:38:05.889514 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 3 13:38:05.889646 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 3 13:38:05.889790 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Mar 3 13:38:05.890034 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Mar 3 13:38:05.890185 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Mar 3 13:38:05.890347 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Mar 3 13:38:05.890368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 3 13:38:05.890384 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 3 13:38:05.890399 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 3 13:38:05.890420 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 3 13:38:05.890435 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 3 13:38:05.890450 kernel: iommu: Default domain type: Translated
Mar 3 13:38:05.890466 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 3 13:38:05.890481 kernel: efivars: Registered efivars operations
Mar 3 13:38:05.890496 kernel: PCI: Using ACPI for IRQ routing
Mar 3 13:38:05.890512 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 3 13:38:05.890527 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Mar 3 13:38:05.890542 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 3 13:38:05.890559 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 3 13:38:05.890706 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 3 13:38:05.891275 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 3 13:38:05.891431 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 3 13:38:05.891450 kernel: vgaarb: loaded
Mar 3 13:38:05.891465 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 3 13:38:05.891480 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 3 13:38:05.891494 kernel: clocksource: Switched to clocksource kvm-clock
Mar 3 13:38:05.891512 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 13:38:05.891526 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 13:38:05.891540 kernel: pnp: PnP ACPI init
Mar 3 13:38:05.891554 kernel: pnp: PnP ACPI: found 5 devices
Mar 3 13:38:05.891568 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 3 13:38:05.891583 kernel: NET: Registered PF_INET protocol family
Mar 3 13:38:05.891597 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 13:38:05.891611 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 3 13:38:05.891625 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 13:38:05.891642 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 3 13:38:05.891656 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 3 13:38:05.891670 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 3 13:38:05.891684 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 3 13:38:05.891698 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 3 13:38:05.891711 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 13:38:05.891725 kernel: NET: Registered PF_XDP protocol family
Mar 3 13:38:05.891860 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 3 13:38:05.891977 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 3 13:38:05.892570 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 3 13:38:05.892703 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 3 13:38:05.892826 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 3 13:38:05.894016 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 3 13:38:05.894041 kernel: PCI: CLS 0 bytes, default 64
Mar 3 13:38:05.894058 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 3 13:38:05.894074 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 3 13:38:05.894091 kernel: clocksource: Switched to clocksource tsc
Mar 3 13:38:05.894111 kernel: Initialise system trusted keyrings
Mar 3 13:38:05.894127 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 3 13:38:05.894143 kernel: Key type asymmetric registered
Mar 3 13:38:05.894157 kernel: Asymmetric key parser 'x509' registered
Mar 3 13:38:05.894172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 3 13:38:05.894187 kernel: io scheduler mq-deadline registered
Mar 3 13:38:05.894202 kernel: io scheduler kyber registered
Mar 3 13:38:05.894217 kernel: io scheduler bfq registered
Mar 3 13:38:05.894232 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 3 13:38:05.894251 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 13:38:05.894266 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 3 13:38:05.894281 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 3 13:38:05.894296 kernel: i8042: Warning: Keylock active
Mar 3 13:38:05.894312 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 3 13:38:05.894327 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 3 13:38:05.894482 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 3 13:38:05.894608 kernel: rtc_cmos 00:00: registered as rtc0
Mar 3 13:38:05.894737 kernel: rtc_cmos 00:00: setting system clock to 2026-03-03T13:38:05 UTC (1772545085)
Mar 3 13:38:05.895732 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 3 13:38:05.895783 kernel: intel_pstate: CPU model not supported
Mar 3 13:38:05.895804 kernel: efifb: probing for efifb
Mar 3 13:38:05.895821 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Mar 3 13:38:05.895858 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 3 13:38:05.895873 kernel: efifb: scrolling: redraw
Mar 3 13:38:05.895888 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 3 13:38:05.895905 kernel: Console: switching to colour frame buffer device 100x37
Mar 3 13:38:05.895924 kernel: fb0: EFI VGA frame buffer device
Mar 3 13:38:05.895939 kernel: pstore: Using crash dump compression: deflate
Mar 3 13:38:05.895956 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 3 13:38:05.895974 kernel: NET: Registered PF_INET6 protocol family
Mar 3 13:38:05.895994 kernel: Segment Routing with IPv6
Mar 3 13:38:05.896020 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 13:38:05.896037 kernel: NET: Registered PF_PACKET protocol family
Mar 3 13:38:05.896054 kernel: Key type dns_resolver registered
Mar 3 13:38:05.896070 kernel: IPI shorthand broadcast: enabled
Mar 3 13:38:05.896091 kernel: sched_clock: Marking stable (2582001887, 148788598)->(2824074886, -93284401)
Mar 3 13:38:05.896107 kernel: registered taskstats version 1
Mar 3 13:38:05.896124 kernel: Loading compiled-in X.509 certificates
Mar 3 13:38:05.896138 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: bf135b2a3d3664cc6742f4e1848867384c1e52f1'
Mar 3 13:38:05.896154 kernel: Demotion targets for Node 0: null
Mar 3 13:38:05.896169 kernel: Key type .fscrypt registered
Mar 3 13:38:05.896184 kernel: Key type fscrypt-provisioning registered
Mar 3 13:38:05.896200 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 3 13:38:05.896216 kernel: ima: Allocated hash algorithm: sha1
Mar 3 13:38:05.896234 kernel: ima: No architecture policies found
Mar 3 13:38:05.896250 kernel: clk: Disabling unused clocks
Mar 3 13:38:05.896266 kernel: Warning: unable to open an initial console.
Mar 3 13:38:05.896282 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 3 13:38:05.896298 kernel: Write protecting the kernel read-only data: 40960k
Mar 3 13:38:05.896319 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 3 13:38:05.896335 kernel: Run /init as init process
Mar 3 13:38:05.896351 kernel: with arguments:
Mar 3 13:38:05.896367 kernel: /init
Mar 3 13:38:05.896382 kernel: with environment:
Mar 3 13:38:05.896397 kernel: HOME=/
Mar 3 13:38:05.896413 kernel: TERM=linux
Mar 3 13:38:05.896431 systemd[1]: Successfully made /usr/ read-only.
Mar 3 13:38:05.896451 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 3 13:38:05.896471 systemd[1]: Detected virtualization amazon. Mar 3 13:38:05.896487 systemd[1]: Detected architecture x86-64. Mar 3 13:38:05.896502 systemd[1]: Running in initrd. Mar 3 13:38:05.896518 systemd[1]: No hostname configured, using default hostname. Mar 3 13:38:05.896535 systemd[1]: Hostname set to . Mar 3 13:38:05.896551 systemd[1]: Initializing machine ID from VM UUID. Mar 3 13:38:05.896567 systemd[1]: Queued start job for default target initrd.target. Mar 3 13:38:05.896587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 3 13:38:05.896603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 3 13:38:05.896621 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 3 13:38:05.896637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 3 13:38:05.896654 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 3 13:38:05.896672 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 3 13:38:05.896689 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 3 13:38:05.896709 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 3 13:38:05.896726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 3 13:38:05.896743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 3 13:38:05.896759 systemd[1]: Reached target paths.target - Path Units. Mar 3 13:38:05.896775 systemd[1]: Reached target slices.target - Slice Units. Mar 3 13:38:05.896792 systemd[1]: Reached target swap.target - Swaps. Mar 3 13:38:05.896808 systemd[1]: Reached target timers.target - Timer Units. Mar 3 13:38:05.896824 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 3 13:38:05.898862 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 3 13:38:05.898890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 3 13:38:05.898906 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 3 13:38:05.898922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 3 13:38:05.898938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 3 13:38:05.898954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 3 13:38:05.898969 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 13:38:05.898985 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 3 13:38:05.899001 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 3 13:38:05.899021 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 3 13:38:05.899038 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 3 13:38:05.899053 systemd[1]: Starting systemd-fsck-usr.service... Mar 3 13:38:05.899069 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 3 13:38:05.899085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Mar 3 13:38:05.899101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:38:05.899145 systemd-journald[188]: Collecting audit messages is disabled. Mar 3 13:38:05.899184 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 3 13:38:05.899204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 3 13:38:05.899223 systemd[1]: Finished systemd-fsck-usr.service. Mar 3 13:38:05.899239 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 3 13:38:05.899255 systemd-journald[188]: Journal started Mar 3 13:38:05.899286 systemd-journald[188]: Runtime Journal (/run/log/journal/ec2fd2d58ed7eada49a1ae46033c71a1) is 4.7M, max 38.1M, 33.3M free. Mar 3 13:38:05.895330 systemd-modules-load[189]: Inserted module 'overlay' Mar 3 13:38:05.915872 systemd[1]: Started systemd-journald.service - Journal Service. Mar 3 13:38:05.921374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:38:05.926126 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 3 13:38:05.931986 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 3 13:38:05.942270 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 3 13:38:05.952829 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 3 13:38:05.952885 kernel: Bridge firewalling registered Mar 3 13:38:05.950078 systemd-modules-load[189]: Inserted module 'br_netfilter' Mar 3 13:38:05.951087 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 3 13:38:05.959494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 3 13:38:05.967994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 13:38:05.969053 systemd-tmpfiles[205]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 3 13:38:05.975566 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 3 13:38:05.980907 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:38:05.990114 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 3 13:38:05.992448 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 3 13:38:06.003180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:38:06.004964 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 3 13:38:06.007323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 3 13:38:06.020941 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c Mar 3 13:38:06.070796 systemd-resolved[229]: Positive Trust Anchors: Mar 3 13:38:06.071735 systemd-resolved[229]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 13:38:06.071803 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 13:38:06.080510 systemd-resolved[229]: Defaulting to hostname 'linux'. Mar 3 13:38:06.081832 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 13:38:06.083217 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:38:06.118877 kernel: SCSI subsystem initialized Mar 3 13:38:06.127876 kernel: Loading iSCSI transport class v2.0-870. Mar 3 13:38:06.138863 kernel: iscsi: registered transport (tcp) Mar 3 13:38:06.159889 kernel: iscsi: registered transport (qla4xxx) Mar 3 13:38:06.159968 kernel: QLogic iSCSI HBA Driver Mar 3 13:38:06.179430 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 3 13:38:06.195025 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 3 13:38:06.196046 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 3 13:38:06.241156 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 3 13:38:06.243227 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 3 13:38:06.295872 kernel: raid6: avx512x4 gen() 17976 MB/s Mar 3 13:38:06.313865 kernel: raid6: avx512x2 gen() 18038 MB/s Mar 3 13:38:06.331866 kernel: raid6: avx512x1 gen() 17951 MB/s Mar 3 13:38:06.349864 kernel: raid6: avx2x4 gen() 17884 MB/s Mar 3 13:38:06.367863 kernel: raid6: avx2x2 gen() 17945 MB/s Mar 3 13:38:06.386110 kernel: raid6: avx2x1 gen() 13775 MB/s Mar 3 13:38:06.386158 kernel: raid6: using algorithm avx512x2 gen() 18038 MB/s Mar 3 13:38:06.405049 kernel: raid6: .... xor() 24742 MB/s, rmw enabled Mar 3 13:38:06.405121 kernel: raid6: using avx512x2 recovery algorithm Mar 3 13:38:06.425877 kernel: xor: automatically using best checksumming function avx Mar 3 13:38:06.592880 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 3 13:38:06.599424 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 3 13:38:06.601509 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 3 13:38:06.625558 systemd-udevd[438]: Using default interface naming scheme 'v255'. Mar 3 13:38:06.632154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 3 13:38:06.636092 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 3 13:38:06.664557 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Mar 3 13:38:06.690804 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 3 13:38:06.692789 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 3 13:38:06.753771 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 3 13:38:06.756999 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 3 13:38:06.852257 kernel: cryptd: max_cpu_qlen set to 1000 Mar 3 13:38:06.874711 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:38:06.875657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 3 13:38:06.876969 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:38:06.884335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:38:06.895753 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 3 13:38:06.896016 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 3 13:38:06.896161 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 3 13:38:06.896318 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Mar 3 13:38:06.896337 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Mar 3 13:38:06.899478 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 3 13:38:06.901585 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Mar 3 13:38:06.910277 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:e5:11:c9:e0:59 Mar 3 13:38:06.913997 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 3 13:38:06.912352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:38:06.912480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:38:06.915171 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 3 13:38:06.917516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:38:06.924504 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 3 13:38:06.924580 kernel: GPT:9289727 != 33554431 Mar 3 13:38:06.924610 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 3 13:38:06.924629 kernel: GPT:9289727 != 33554431 Mar 3 13:38:06.924647 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 3 13:38:06.924665 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 3 13:38:06.932146 (udev-worker)[485]: Network interface NamePolicy= disabled on kernel command line. 
Mar 3 13:38:06.946513 kernel: AES CTR mode by8 optimization enabled Mar 3 13:38:06.976470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:38:06.986918 kernel: nvme nvme0: using unchecked data buffer Mar 3 13:38:07.117443 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 3 13:38:07.118463 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 3 13:38:07.131899 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 3 13:38:07.132735 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 3 13:38:07.152276 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 3 13:38:07.163180 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 3 13:38:07.163814 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 3 13:38:07.165099 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 3 13:38:07.166223 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 3 13:38:07.167859 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 3 13:38:07.171980 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 3 13:38:07.189747 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 3 13:38:07.193987 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 3 13:38:07.194023 disk-uuid[675]: Primary Header is updated. Mar 3 13:38:07.194023 disk-uuid[675]: Secondary Entries is updated. Mar 3 13:38:07.194023 disk-uuid[675]: Secondary Header is updated. Mar 3 13:38:08.210891 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 3 13:38:08.212274 disk-uuid[682]: The operation has completed successfully. 
Mar 3 13:38:08.342512 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 3 13:38:08.342643 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 3 13:38:08.380644 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 3 13:38:08.399319 sh[943]: Success Mar 3 13:38:08.419426 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 3 13:38:08.419508 kernel: device-mapper: uevent: version 1.0.3 Mar 3 13:38:08.420125 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 3 13:38:08.432101 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Mar 3 13:38:08.507157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 3 13:38:08.510920 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 3 13:38:08.526485 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 3 13:38:08.546893 kernel: BTRFS: device fsid f550cb98-648e-4600-9237-4b15eb09827b devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (966) Mar 3 13:38:08.549862 kernel: BTRFS info (device dm-0): first mount of filesystem f550cb98-648e-4600-9237-4b15eb09827b Mar 3 13:38:08.549907 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:38:08.678901 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 3 13:38:08.678982 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 3 13:38:08.679003 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 3 13:38:08.691084 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 3 13:38:08.692007 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Mar 3 13:38:08.692536 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 3 13:38:08.693278 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 3 13:38:08.695955 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 3 13:38:08.723893 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (999) Mar 3 13:38:08.728829 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:38:08.729204 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:38:08.743476 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 3 13:38:08.743538 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 3 13:38:08.749943 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:38:08.751064 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 3 13:38:08.754034 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 3 13:38:08.794401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 3 13:38:08.797517 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 3 13:38:08.835154 systemd-networkd[1135]: lo: Link UP Mar 3 13:38:08.835166 systemd-networkd[1135]: lo: Gained carrier Mar 3 13:38:08.836854 systemd-networkd[1135]: Enumeration completed Mar 3 13:38:08.837276 systemd-networkd[1135]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:38:08.837282 systemd-networkd[1135]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 3 13:38:08.838486 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 13:38:08.839140 systemd[1]: Reached target network.target - Network. Mar 3 13:38:08.841754 systemd-networkd[1135]: eth0: Link UP Mar 3 13:38:08.841763 systemd-networkd[1135]: eth0: Gained carrier Mar 3 13:38:08.841782 systemd-networkd[1135]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:38:08.859390 systemd-networkd[1135]: eth0: DHCPv4 address 172.31.31.254/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 3 13:38:09.149061 ignition[1080]: Ignition 2.22.0 Mar 3 13:38:09.149075 ignition[1080]: Stage: fetch-offline Mar 3 13:38:09.149245 ignition[1080]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:38:09.149253 ignition[1080]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:38:09.149444 ignition[1080]: Ignition finished successfully Mar 3 13:38:09.151356 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 3 13:38:09.153069 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 3 13:38:09.185916 ignition[1145]: Ignition 2.22.0 Mar 3 13:38:09.185933 ignition[1145]: Stage: fetch Mar 3 13:38:09.186293 ignition[1145]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:38:09.186305 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:38:09.186418 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:38:09.236081 ignition[1145]: PUT result: OK Mar 3 13:38:09.238872 ignition[1145]: parsed url from cmdline: "" Mar 3 13:38:09.238883 ignition[1145]: no config URL provided Mar 3 13:38:09.238893 ignition[1145]: reading system config file "/usr/lib/ignition/user.ign" Mar 3 13:38:09.238909 ignition[1145]: no config at "/usr/lib/ignition/user.ign" Mar 3 13:38:09.238939 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:38:09.240157 ignition[1145]: PUT result: OK Mar 3 13:38:09.240209 ignition[1145]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 3 13:38:09.241321 ignition[1145]: GET result: OK Mar 3 13:38:09.241485 ignition[1145]: parsing config with SHA512: 390004e7c5b747f5a526a8b1357bb4e4d9ce4b5b5b63217b96af9b0c3d754ee0e38f6c30b39b8613952d460e646dbe1217de3cc3e77beff560750d14a26986ad Mar 3 13:38:09.249165 unknown[1145]: fetched base config from "system" Mar 3 13:38:09.249178 unknown[1145]: fetched base config from "system" Mar 3 13:38:09.249683 ignition[1145]: fetch: fetch complete Mar 3 13:38:09.249185 unknown[1145]: fetched user config from "aws" Mar 3 13:38:09.249690 ignition[1145]: fetch: fetch passed Mar 3 13:38:09.249749 ignition[1145]: Ignition finished successfully Mar 3 13:38:09.253005 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 3 13:38:09.254421 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 3 13:38:09.286543 ignition[1152]: Ignition 2.22.0 Mar 3 13:38:09.286560 ignition[1152]: Stage: kargs Mar 3 13:38:09.286934 ignition[1152]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:38:09.286947 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:38:09.287065 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:38:09.288723 ignition[1152]: PUT result: OK Mar 3 13:38:09.290722 ignition[1152]: kargs: kargs passed Mar 3 13:38:09.290794 ignition[1152]: Ignition finished successfully Mar 3 13:38:09.293307 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 3 13:38:09.294773 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 3 13:38:09.324452 ignition[1159]: Ignition 2.22.0 Mar 3 13:38:09.324467 ignition[1159]: Stage: disks Mar 3 13:38:09.324923 ignition[1159]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:38:09.324956 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:38:09.325519 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:38:09.326405 ignition[1159]: PUT result: OK Mar 3 13:38:09.328515 ignition[1159]: disks: disks passed Mar 3 13:38:09.328587 ignition[1159]: Ignition finished successfully Mar 3 13:38:09.330547 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 3 13:38:09.331147 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 3 13:38:09.331514 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 3 13:38:09.332142 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 3 13:38:09.332674 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 13:38:09.333245 systemd[1]: Reached target basic.target - Basic System. Mar 3 13:38:09.334826 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 3 13:38:09.377358 systemd-fsck[1167]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 3 13:38:09.380131 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 3 13:38:09.381818 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 3 13:38:09.528869 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f0c751de-febc-4e57-b330-c926d38ed5ec r/w with ordered data mode. Quota mode: none. Mar 3 13:38:09.530059 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 3 13:38:09.530946 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 3 13:38:09.532746 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 3 13:38:09.534385 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 3 13:38:09.536641 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 3 13:38:09.536959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 3 13:38:09.536984 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 3 13:38:09.546149 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 3 13:38:09.548394 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 3 13:38:09.560862 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1186) Mar 3 13:38:09.564952 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:38:09.565015 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:38:09.570904 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 3 13:38:09.570967 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 3 13:38:09.573450 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 3 13:38:09.922886 initrd-setup-root[1210]: cut: /sysroot/etc/passwd: No such file or directory Mar 3 13:38:09.948567 initrd-setup-root[1217]: cut: /sysroot/etc/group: No such file or directory Mar 3 13:38:09.975899 initrd-setup-root[1224]: cut: /sysroot/etc/shadow: No such file or directory Mar 3 13:38:09.980163 initrd-setup-root[1231]: cut: /sysroot/etc/gshadow: No such file or directory Mar 3 13:38:10.040989 systemd-networkd[1135]: eth0: Gained IPv6LL Mar 3 13:38:10.290551 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 3 13:38:10.292488 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 3 13:38:10.294096 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 3 13:38:10.306690 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 3 13:38:10.308862 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:38:10.331298 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 3 13:38:10.337425 ignition[1298]: INFO : Ignition 2.22.0 Mar 3 13:38:10.337425 ignition[1298]: INFO : Stage: mount Mar 3 13:38:10.338594 ignition[1298]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:38:10.338594 ignition[1298]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:38:10.338594 ignition[1298]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:38:10.339664 ignition[1298]: INFO : PUT result: OK Mar 3 13:38:10.341884 ignition[1298]: INFO : mount: mount passed Mar 3 13:38:10.343191 ignition[1298]: INFO : Ignition finished successfully Mar 3 13:38:10.344199 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 3 13:38:10.345817 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 3 13:38:10.363296 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 3 13:38:10.388887 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1311) Mar 3 13:38:10.392999 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:38:10.393062 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:38:10.400033 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 3 13:38:10.400102 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 3 13:38:10.402901 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 3 13:38:10.433900 ignition[1327]: INFO : Ignition 2.22.0 Mar 3 13:38:10.433900 ignition[1327]: INFO : Stage: files Mar 3 13:38:10.434927 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:38:10.434927 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 3 13:38:10.434927 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 3 13:38:10.436065 ignition[1327]: INFO : PUT result: OK Mar 3 13:38:10.439112 ignition[1327]: DEBUG : files: compiled without relabeling support, skipping Mar 3 13:38:10.440861 ignition[1327]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 3 13:38:10.440861 ignition[1327]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 3 13:38:10.452675 ignition[1327]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 3 13:38:10.453611 ignition[1327]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 3 13:38:10.454505 ignition[1327]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 3 13:38:10.453917 unknown[1327]: wrote ssh authorized keys file for user: core Mar 3 13:38:10.456016 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" 
Mar 3 13:38:10.456657 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 3 13:38:10.552236 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 3 13:38:10.791908 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 3 13:38:10.792821 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 3 13:38:10.792821 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 3 13:38:10.792821 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 3 13:38:10.792821 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 3 13:38:10.792821 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 3 13:38:10.792821 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 3 13:38:10.792821 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 3 13:38:10.792821 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 3 13:38:10.798358 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 3 13:38:10.798358 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 3 13:38:10.798358 
ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 3 13:38:10.801299 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 3 13:38:10.801299 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 3 13:38:10.801299 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 3 13:38:11.314211 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 3 13:38:13.538295 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 3 13:38:13.538295 ignition[1327]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 3 13:38:13.551568 ignition[1327]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:38:13.556081 ignition[1327]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:38:13.556081 ignition[1327]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 3 13:38:13.556081 ignition[1327]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 3 13:38:13.558534 ignition[1327]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 13:38:13.558534 ignition[1327]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:38:13.558534 ignition[1327]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:38:13.558534 ignition[1327]: INFO : files: files passed
Mar 3 13:38:13.558534 ignition[1327]: INFO : Ignition finished successfully
Mar 3 13:38:13.558336 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 3 13:38:13.559445 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 3 13:38:13.563408 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 3 13:38:13.572201 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 3 13:38:13.572929 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 3 13:38:13.585987 initrd-setup-root-after-ignition[1357]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:38:13.585987 initrd-setup-root-after-ignition[1357]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:38:13.588024 initrd-setup-root-after-ignition[1361]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:38:13.588629 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:38:13.589572 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 13:38:13.590886 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 13:38:13.643469 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 13:38:13.643611 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 13:38:13.644978 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 13:38:13.646087 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 13:38:13.646912 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 13:38:13.648148 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 13:38:13.686606 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:38:13.688553 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 13:38:13.713835 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:38:13.715073 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:38:13.715748 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 13:38:13.716741 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 13:38:13.716983 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:38:13.718119 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 13:38:13.719007 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 13:38:13.719787 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 13:38:13.720643 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 13:38:13.721437 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 13:38:13.722172 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 13:38:13.723021 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 13:38:13.723833 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 13:38:13.724727 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 13:38:13.725830 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 13:38:13.726630 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 13:38:13.727353 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 13:38:13.727575 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 13:38:13.728652 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:38:13.729483 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:38:13.730128 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 13:38:13.730859 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:38:13.731311 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 13:38:13.731485 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 13:38:13.733059 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 13:38:13.733310 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:38:13.734017 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 13:38:13.734213 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 13:38:13.736964 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 13:38:13.737557 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 13:38:13.737780 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:38:13.740162 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 13:38:13.742947 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 13:38:13.743198 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:38:13.744045 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 13:38:13.744194 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 13:38:13.751298 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 13:38:13.754951 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 13:38:13.777585 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 13:38:13.782162 ignition[1381]: INFO : Ignition 2.22.0
Mar 3 13:38:13.782162 ignition[1381]: INFO : Stage: umount
Mar 3 13:38:13.783711 ignition[1381]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:38:13.783711 ignition[1381]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 13:38:13.783711 ignition[1381]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 13:38:13.783711 ignition[1381]: INFO : PUT result: OK
Mar 3 13:38:13.786343 ignition[1381]: INFO : umount: umount passed
Mar 3 13:38:13.786877 ignition[1381]: INFO : Ignition finished successfully
Mar 3 13:38:13.788584 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 13:38:13.788733 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 13:38:13.790056 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 13:38:13.790122 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 13:38:13.790557 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 13:38:13.790615 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 13:38:13.791199 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 3 13:38:13.791256 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 3 13:38:13.791822 systemd[1]: Stopped target network.target - Network.
Mar 3 13:38:13.792543 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 13:38:13.792603 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 13:38:13.793205 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 13:38:13.793761 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 13:38:13.797912 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:38:13.798284 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 13:38:13.799226 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 13:38:13.799879 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 13:38:13.799942 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:38:13.800571 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 13:38:13.800619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:38:13.801190 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 13:38:13.801268 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 13:38:13.802801 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 13:38:13.802874 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 13:38:13.803595 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 13:38:13.804754 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 13:38:13.811030 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 13:38:13.811215 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 13:38:13.815543 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 13:38:13.816263 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 13:38:13.816400 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 13:38:13.818800 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 13:38:13.819681 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 13:38:13.820738 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 13:38:13.820832 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:38:13.822565 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 13:38:13.823106 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 13:38:13.823169 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 13:38:13.825966 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 13:38:13.826023 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:38:13.827902 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 13:38:13.827966 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:38:13.829132 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 13:38:13.829934 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:38:13.831063 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:38:13.837249 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 13:38:13.837356 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:38:13.851420 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 13:38:13.851892 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:38:13.853987 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 13:38:13.854112 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 13:38:13.855335 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 13:38:13.855404 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:38:13.855787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 13:38:13.855818 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:38:13.857290 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 13:38:13.857341 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 13:38:13.858295 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 13:38:13.858341 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 13:38:13.859137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 13:38:13.859179 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 13:38:13.861968 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 13:38:13.862428 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 13:38:13.862478 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:38:13.864999 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 13:38:13.865039 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:38:13.865634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:38:13.865674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:38:13.867801 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 3 13:38:13.867863 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 3 13:38:13.867902 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:38:13.882003 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 13:38:13.882108 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 13:38:13.919836 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 13:38:13.920024 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 13:38:13.921342 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 13:38:13.922090 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 13:38:13.922164 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 13:38:13.923720 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 13:38:13.941333 systemd[1]: Switching root.
Mar 3 13:38:13.984051 systemd-journald[188]: Journal stopped
Mar 3 13:38:15.620327 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 3 13:38:15.620398 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 13:38:15.620418 kernel: SELinux: policy capability open_perms=1
Mar 3 13:38:15.620434 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 13:38:15.620446 kernel: SELinux: policy capability always_check_network=0
Mar 3 13:38:15.620457 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 13:38:15.620469 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 13:38:15.620480 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 13:38:15.620495 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 13:38:15.620507 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 13:38:15.620518 kernel: audit: type=1403 audit(1772545094.365:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 13:38:15.620531 systemd[1]: Successfully loaded SELinux policy in 72.539ms.
Mar 3 13:38:15.620562 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.455ms.
Mar 3 13:38:15.620576 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:38:15.620590 systemd[1]: Detected virtualization amazon.
Mar 3 13:38:15.620602 systemd[1]: Detected architecture x86-64.
Mar 3 13:38:15.620614 systemd[1]: Detected first boot.
Mar 3 13:38:15.620627 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:38:15.620639 zram_generator::config[1424]: No configuration found.
Mar 3 13:38:15.620656 kernel: Guest personality initialized and is inactive
Mar 3 13:38:15.620670 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 3 13:38:15.620682 kernel: Initialized host personality
Mar 3 13:38:15.620693 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 13:38:15.620705 systemd[1]: Populated /etc with preset unit settings.
Mar 3 13:38:15.620717 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 13:38:15.620730 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 13:38:15.620741 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 13:38:15.620754 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:38:15.620767 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 13:38:15.620783 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 13:38:15.620795 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 13:38:15.620807 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 13:38:15.620820 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 13:38:15.620832 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 13:38:15.621884 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 13:38:15.621905 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 13:38:15.621919 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:38:15.621936 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:38:15.621948 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 13:38:15.621961 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 13:38:15.621973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 13:38:15.621986 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:38:15.621998 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 13:38:15.622011 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:38:15.622023 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:38:15.622038 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 13:38:15.622051 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 13:38:15.622063 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 13:38:15.623546 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 13:38:15.623575 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:38:15.623588 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 13:38:15.623601 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:38:15.623614 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:38:15.623626 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 13:38:15.623642 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 13:38:15.623656 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 13:38:15.623668 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:38:15.623681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:38:15.623693 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:38:15.623705 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 13:38:15.623717 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 13:38:15.623729 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 13:38:15.623741 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 13:38:15.623756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:38:15.623768 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 13:38:15.623780 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 13:38:15.623792 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 13:38:15.623805 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 13:38:15.623818 systemd[1]: Reached target machines.target - Containers.
Mar 3 13:38:15.623830 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 13:38:15.623852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:38:15.623867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:38:15.623879 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 13:38:15.623892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:38:15.623910 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:38:15.623923 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:38:15.623936 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 13:38:15.623948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:38:15.623961 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 13:38:15.623973 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 13:38:15.623988 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 13:38:15.624000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 13:38:15.624012 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 13:38:15.624025 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:38:15.624037 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:38:15.624050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:38:15.624063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 13:38:15.624074 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 13:38:15.624087 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 13:38:15.624101 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 13:38:15.624118 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 13:38:15.624130 systemd[1]: Stopped verity-setup.service.
Mar 3 13:38:15.624143 kernel: fuse: init (API version 7.41)
Mar 3 13:38:15.624156 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:38:15.624681 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 13:38:15.624693 kernel: loop: module loaded
Mar 3 13:38:15.624706 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 13:38:15.624719 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 13:38:15.624731 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 13:38:15.624781 systemd-journald[1503]: Collecting audit messages is disabled.
Mar 3 13:38:15.624810 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 13:38:15.624823 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 13:38:15.624836 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:38:15.624861 systemd-journald[1503]: Journal started
Mar 3 13:38:15.624886 systemd-journald[1503]: Runtime Journal (/run/log/journal/ec2fd2d58ed7eada49a1ae46033c71a1) is 4.7M, max 38.1M, 33.3M free.
Mar 3 13:38:15.650563 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 13:38:15.650627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 13:38:15.650643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:38:15.650659 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:38:15.650674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:38:15.650688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:38:15.650709 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 3 13:38:15.650724 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 3 13:38:15.352201 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 13:38:15.654026 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 13:38:15.368176 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 3 13:38:15.368668 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 13:38:15.655814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:38:15.658040 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:38:15.658778 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 3 13:38:15.669656 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 3 13:38:15.680951 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 3 13:38:15.685924 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 3 13:38:15.688902 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 3 13:38:15.688940 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 13:38:15.690274 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 3 13:38:15.696056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 3 13:38:15.696739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:38:15.701027 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 3 13:38:15.711040 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 3 13:38:15.711541 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:38:15.714192 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 3 13:38:15.714710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:38:15.725525 kernel: ACPI: bus type drm_connector registered
Mar 3 13:38:15.723025 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 3 13:38:15.726282 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:38:15.727018 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:38:15.728175 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:38:15.728332 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:38:15.729978 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 3 13:38:15.730466 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 3 13:38:15.736164 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 13:38:15.740799 systemd-journald[1503]: Time spent on flushing to /var/log/journal/ec2fd2d58ed7eada49a1ae46033c71a1 is 61.146ms for 1016 entries.
Mar 3 13:38:15.740799 systemd-journald[1503]: System Journal (/var/log/journal/ec2fd2d58ed7eada49a1ae46033c71a1) is 8M, max 195.6M, 187.6M free.
Mar 3 13:38:15.824235 systemd-journald[1503]: Received client request to flush runtime journal.
Mar 3 13:38:15.824289 kernel: loop0: detected capacity change from 0 to 128560
Mar 3 13:38:15.739247 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 13:38:15.743008 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:38:15.745569 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 3 13:38:15.766397 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 3 13:38:15.766956 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 3 13:38:15.769078 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 3 13:38:15.792910 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:38:15.823333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:38:15.826314 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 3 13:38:15.829445 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 3 13:38:15.837021 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 3 13:38:15.838962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 13:38:15.869929 systemd-tmpfiles[1575]: ACLs are not supported, ignoring.
Mar 3 13:38:15.870253 systemd-tmpfiles[1575]: ACLs are not supported, ignoring.
Mar 3 13:38:15.874245 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:38:15.962878 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 3 13:38:15.993449 kernel: loop1: detected capacity change from 0 to 110984
Mar 3 13:38:16.096876 kernel: loop2: detected capacity change from 0 to 72368
Mar 3 13:38:16.210877 kernel: loop3: detected capacity change from 0 to 217752
Mar 3 13:38:16.353873 kernel: loop4: detected capacity change from 0 to 128560
Mar 3 13:38:16.371169 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 3 13:38:16.388298 kernel: loop5: detected capacity change from 0 to 110984
Mar 3 13:38:16.419866 kernel: loop6: detected capacity change from 0 to 72368
Mar 3 13:38:16.434471 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 3 13:38:16.438149 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:38:16.443874 kernel: loop7: detected capacity change from 0 to 217752
Mar 3 13:38:16.467436 (sd-merge)[1585]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 3 13:38:16.468297 (sd-merge)[1585]: Merged extensions into '/usr'.
Mar 3 13:38:16.478968 systemd-udevd[1587]: Using default interface naming scheme 'v255'.
Mar 3 13:38:16.481753 systemd[1]: Reload requested from client PID 1553 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 3 13:38:16.481935 systemd[1]: Reloading...
Mar 3 13:38:16.563873 zram_generator::config[1609]: No configuration found.
Mar 3 13:38:16.900556 (udev-worker)[1637]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 13:38:17.031928 kernel: mousedev: PS/2 mouse device common for all mice Mar 3 13:38:17.032034 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 3 13:38:17.040409 kernel: ACPI: button: Power Button [PWRF] Mar 3 13:38:17.040499 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Mar 3 13:38:17.041530 kernel: ACPI: button: Sleep Button [SLPF] Mar 3 13:38:17.055871 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 3 13:38:17.099332 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 3 13:38:17.099728 systemd[1]: Reloading finished in 617 ms. Mar 3 13:38:17.113339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 3 13:38:17.116485 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 3 13:38:17.144033 systemd[1]: Starting ensure-sysext.service... Mar 3 13:38:17.153194 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 3 13:38:17.160130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 3 13:38:17.216981 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 3 13:38:17.218045 systemd[1]: Reload requested from client PID 1710 ('systemctl') (unit ensure-sysext.service)... Mar 3 13:38:17.218887 systemd[1]: Reloading... Mar 3 13:38:17.264010 systemd-tmpfiles[1714]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 3 13:38:17.264050 systemd-tmpfiles[1714]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 3 13:38:17.264430 systemd-tmpfiles[1714]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Mar 3 13:38:17.267710 systemd-tmpfiles[1714]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 3 13:38:17.270738 systemd-tmpfiles[1714]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 3 13:38:17.272398 systemd-tmpfiles[1714]: ACLs are not supported, ignoring. Mar 3 13:38:17.272493 systemd-tmpfiles[1714]: ACLs are not supported, ignoring. Mar 3 13:38:17.282022 systemd-tmpfiles[1714]: Detected autofs mount point /boot during canonicalization of boot. Mar 3 13:38:17.283020 systemd-tmpfiles[1714]: Skipping /boot Mar 3 13:38:17.308414 systemd-tmpfiles[1714]: Detected autofs mount point /boot during canonicalization of boot. Mar 3 13:38:17.308635 systemd-tmpfiles[1714]: Skipping /boot Mar 3 13:38:17.413867 zram_generator::config[1744]: No configuration found. Mar 3 13:38:17.519684 ldconfig[1542]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 3 13:38:17.825370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 3 13:38:17.826508 systemd[1]: Reloading finished in 607 ms. Mar 3 13:38:17.847356 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 3 13:38:17.848196 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 3 13:38:17.860175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:38:17.914474 systemd[1]: Finished ensure-sysext.service. Mar 3 13:38:17.916822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:38:17.918665 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 3 13:38:17.922112 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Mar 3 13:38:17.923114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 3 13:38:17.929980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 3 13:38:17.931670 systemd-networkd[1711]: lo: Link UP Mar 3 13:38:17.932253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 3 13:38:17.934908 systemd-networkd[1711]: lo: Gained carrier Mar 3 13:38:17.935131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 3 13:38:17.937246 systemd-networkd[1711]: Enumeration completed Mar 3 13:38:17.938004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 3 13:38:17.938525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 3 13:38:17.940006 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 3 13:38:17.940166 systemd-networkd[1711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:38:17.940170 systemd-networkd[1711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 3 13:38:17.940392 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 3 13:38:17.942199 systemd-networkd[1711]: eth0: Link UP Mar 3 13:38:17.942321 systemd-networkd[1711]: eth0: Gained carrier Mar 3 13:38:17.942341 systemd-networkd[1711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:38:17.944127 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Mar 3 13:38:17.949866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 3 13:38:17.950275 systemd[1]: Reached target time-set.target - System Time Set. Mar 3 13:38:17.952946 systemd-networkd[1711]: eth0: DHCPv4 address 172.31.31.254/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 3 13:38:17.956076 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 3 13:38:17.961110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:38:17.961914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:38:17.962398 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 13:38:17.964251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 3 13:38:17.969285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 3 13:38:17.970560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 3 13:38:17.971093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 3 13:38:17.981040 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 3 13:38:17.983550 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 3 13:38:17.984328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 3 13:38:17.990591 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 3 13:38:17.991296 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 3 13:38:17.994561 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 3 13:38:17.995072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 3 13:38:17.997151 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 3 13:38:18.018151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 3 13:38:18.031987 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 3 13:38:18.036292 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 3 13:38:18.038012 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 3 13:38:18.043087 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 3 13:38:18.072257 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 3 13:38:18.074214 augenrules[1930]: No rules Mar 3 13:38:18.072990 systemd[1]: audit-rules.service: Deactivated successfully. Mar 3 13:38:18.073175 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 3 13:38:18.089522 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 3 13:38:18.091316 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 3 13:38:18.096687 systemd-resolved[1899]: Positive Trust Anchors: Mar 3 13:38:18.096705 systemd-resolved[1899]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 13:38:18.096742 systemd-resolved[1899]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 13:38:18.101422 systemd-resolved[1899]: Defaulting to hostname 'linux'. Mar 3 13:38:18.102971 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 13:38:18.103468 systemd[1]: Reached target network.target - Network. Mar 3 13:38:18.103857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:38:18.124122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:38:18.124837 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 13:38:18.125396 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 3 13:38:18.125781 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 3 13:38:18.126168 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 3 13:38:18.126640 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 3 13:38:18.127096 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 3 13:38:18.127416 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 3 13:38:18.127716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 3 13:38:18.127753 systemd[1]: Reached target paths.target - Path Units. Mar 3 13:38:18.128203 systemd[1]: Reached target timers.target - Timer Units. Mar 3 13:38:18.129572 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 3 13:38:18.131459 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 3 13:38:18.134053 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 3 13:38:18.134550 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 3 13:38:18.134904 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 3 13:38:18.137180 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 3 13:38:18.137860 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 3 13:38:18.138835 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 3 13:38:18.140195 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 13:38:18.140530 systemd[1]: Reached target basic.target - Basic System. Mar 3 13:38:18.140923 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 3 13:38:18.140952 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 3 13:38:18.141950 systemd[1]: Starting containerd.service - containerd container runtime... Mar 3 13:38:18.144980 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 3 13:38:18.146730 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 3 13:38:18.148059 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Mar 3 13:38:18.152994 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 3 13:38:18.156968 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 3 13:38:18.157368 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 3 13:38:18.158345 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 3 13:38:18.164966 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 3 13:38:18.167267 systemd[1]: Started ntpd.service - Network Time Service. Mar 3 13:38:18.169947 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 3 13:38:18.172701 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 3 13:38:18.176199 jq[1947]: false Mar 3 13:38:18.181025 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 3 13:38:18.186026 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 3 13:38:18.192159 oslogin_cache_refresh[1949]: Refreshing passwd entry cache Mar 3 13:38:18.199344 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Refreshing passwd entry cache Mar 3 13:38:18.199994 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 3 13:38:18.202264 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 3 13:38:18.202762 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 3 13:38:18.204730 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Failure getting users, quitting Mar 3 13:38:18.204730 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 3 13:38:18.204730 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Refreshing group entry cache Mar 3 13:38:18.203312 oslogin_cache_refresh[1949]: Failure getting users, quitting Mar 3 13:38:18.203330 oslogin_cache_refresh[1949]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 3 13:38:18.204994 systemd[1]: Starting update-engine.service - Update Engine... Mar 3 13:38:18.203367 oslogin_cache_refresh[1949]: Refreshing group entry cache Mar 3 13:38:18.207979 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Failure getting groups, quitting Mar 3 13:38:18.207979 google_oslogin_nss_cache[1949]: oslogin_cache_refresh[1949]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 3 13:38:18.207969 oslogin_cache_refresh[1949]: Failure getting groups, quitting Mar 3 13:38:18.207981 oslogin_cache_refresh[1949]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 3 13:38:18.210232 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 3 13:38:18.215585 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 3 13:38:18.218191 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 3 13:38:18.218380 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 3 13:38:18.218634 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 3 13:38:18.220192 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 3 13:38:18.226309 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 3 13:38:18.233182 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 3 13:38:18.247048 extend-filesystems[1948]: Found /dev/nvme0n1p6 Mar 3 13:38:18.260798 extend-filesystems[1948]: Found /dev/nvme0n1p9 Mar 3 13:38:18.278927 jq[1959]: true Mar 3 13:38:18.281100 extend-filesystems[1948]: Checking size of /dev/nvme0n1p9 Mar 3 13:38:18.294120 update_engine[1958]: I20260303 13:38:18.292989 1958 main.cc:92] Flatcar Update Engine starting Mar 3 13:38:18.298945 (ntainerd)[1982]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 3 13:38:18.307836 ntpd[1951]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting Mar 3 13:38:18.308573 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting Mar 3 13:38:18.308573 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 13:38:18.308573 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: ---------------------------------------------------- Mar 3 13:38:18.308573 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Mar 3 13:38:18.308573 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 13:38:18.308573 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: corporation. Support and training for ntp-4 are Mar 3 13:38:18.308573 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: available at https://www.nwtime.org/support Mar 3 13:38:18.308573 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: ---------------------------------------------------- Mar 3 13:38:18.307923 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 13:38:18.307930 ntpd[1951]: ---------------------------------------------------- Mar 3 13:38:18.307937 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Mar 3 13:38:18.307944 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 13:38:18.307951 ntpd[1951]: corporation. 
Support and training for ntp-4 are Mar 3 13:38:18.307958 ntpd[1951]: available at https://www.nwtime.org/support Mar 3 13:38:18.307964 ntpd[1951]: ---------------------------------------------------- Mar 3 13:38:18.316329 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 3 13:38:18.319923 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: proto: precision = 0.056 usec (-24) Mar 3 13:38:18.315697 ntpd[1951]: proto: precision = 0.056 usec (-24) Mar 3 13:38:18.320692 ntpd[1951]: basedate set to 2026-02-19 Mar 3 13:38:18.320715 ntpd[1951]: gps base set to 2026-02-22 (week 2407) Mar 3 13:38:18.320806 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: basedate set to 2026-02-19 Mar 3 13:38:18.320806 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: gps base set to 2026-02-22 (week 2407) Mar 3 13:38:18.320863 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 13:38:18.320821 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 13:38:18.320999 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 13:38:18.321396 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 13:38:18.322039 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 13:38:18.322713 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 13:38:18.322713 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: Listen normally on 3 eth0 172.31.31.254:123 Mar 3 13:38:18.322713 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: Listen normally on 4 lo [::1]:123 Mar 3 13:38:18.322713 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: bind(21) AF_INET6 [fe80::4e5:11ff:fec9:e059%2]:123 flags 0x811 failed: Cannot assign requested address Mar 3 13:38:18.322713 ntpd[1951]: 3 Mar 13:38:18 ntpd[1951]: unable to create socket on eth0 (5) for [fe80::4e5:11ff:fec9:e059%2]:123 Mar 3 13:38:18.322065 ntpd[1951]: Listen normally on 3 eth0 172.31.31.254:123 Mar 3 13:38:18.322088 ntpd[1951]: Listen normally on 4 lo [::1]:123 Mar 3 13:38:18.322110 ntpd[1951]: bind(21) AF_INET6 
[fe80::4e5:11ff:fec9:e059%2]:123 flags 0x811 failed: Cannot assign requested address Mar 3 13:38:18.322124 ntpd[1951]: unable to create socket on eth0 (5) for [fe80::4e5:11ff:fec9:e059%2]:123 Mar 3 13:38:18.324106 coreos-metadata[1944]: Mar 03 13:38:18.323 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 3 13:38:18.325856 kernel: ntpd[1951]: segfault at 24 ip 000055861e1f5aeb sp 00007ffddbc90ed0 error 4 in ntpd[68aeb,55861e193000+80000] likely on CPU 0 (core 0, socket 0) Mar 3 13:38:18.325896 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Mar 3 13:38:18.326390 jq[1988]: true Mar 3 13:38:18.328253 systemd[1]: motdgen.service: Deactivated successfully. Mar 3 13:38:18.328453 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 3 13:38:18.342911 coreos-metadata[1944]: Mar 03 13:38:18.341 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 3 13:38:18.343012 coreos-metadata[1944]: Mar 03 13:38:18.342 INFO Fetch successful Mar 3 13:38:18.343035 coreos-metadata[1944]: Mar 03 13:38:18.343 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 3 13:38:18.343301 tar[1967]: linux-amd64/LICENSE Mar 3 13:38:18.343542 tar[1967]: linux-amd64/helm Mar 3 13:38:18.344934 coreos-metadata[1944]: Mar 03 13:38:18.344 INFO Fetch successful Mar 3 13:38:18.344934 coreos-metadata[1944]: Mar 03 13:38:18.344 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 3 13:38:18.349501 coreos-metadata[1944]: Mar 03 13:38:18.348 INFO Fetch successful Mar 3 13:38:18.349501 coreos-metadata[1944]: Mar 03 13:38:18.348 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 3 13:38:18.351017 coreos-metadata[1944]: Mar 03 13:38:18.350 INFO Fetch successful Mar 3 13:38:18.351059 
coreos-metadata[1944]: Mar 03 13:38:18.351 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 3 13:38:18.351450 extend-filesystems[1948]: Resized partition /dev/nvme0n1p9 Mar 3 13:38:18.352732 coreos-metadata[1944]: Mar 03 13:38:18.352 INFO Fetch failed with 404: resource not found Mar 3 13:38:18.352783 coreos-metadata[1944]: Mar 03 13:38:18.352 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 3 13:38:18.356906 coreos-metadata[1944]: Mar 03 13:38:18.354 INFO Fetch successful Mar 3 13:38:18.356906 coreos-metadata[1944]: Mar 03 13:38:18.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 3 13:38:18.359586 systemd-coredump[2005]: Process 1951 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Mar 3 13:38:18.360355 coreos-metadata[1944]: Mar 03 13:38:18.359 INFO Fetch successful Mar 3 13:38:18.360355 coreos-metadata[1944]: Mar 03 13:38:18.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 3 13:38:18.367937 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 3 13:38:18.367991 extend-filesystems[2006]: resize2fs 1.47.3 (8-Jul-2025) Mar 3 13:38:18.366942 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. 
Mar 3 13:38:18.371584 coreos-metadata[1944]: Mar 03 13:38:18.360 INFO Fetch successful Mar 3 13:38:18.371584 coreos-metadata[1944]: Mar 03 13:38:18.360 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 3 13:38:18.371584 coreos-metadata[1944]: Mar 03 13:38:18.361 INFO Fetch successful Mar 3 13:38:18.371584 coreos-metadata[1944]: Mar 03 13:38:18.361 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 3 13:38:18.371584 coreos-metadata[1944]: Mar 03 13:38:18.365 INFO Fetch successful Mar 3 13:38:18.372924 systemd[1]: Started systemd-coredump@0-2005-0.service - Process Core Dump (PID 2005/UID 0). Mar 3 13:38:18.387197 dbus-daemon[1945]: [system] SELinux support is enabled Mar 3 13:38:18.389026 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 3 13:38:18.394281 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 3 13:38:18.394314 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 3 13:38:18.398170 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 3 13:38:18.398186 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 3 13:38:18.400981 systemd-logind[1956]: Watching system buttons on /dev/input/event2 (Power Button) Mar 3 13:38:18.401002 systemd-logind[1956]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 3 13:38:18.401020 systemd-logind[1956]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 3 13:38:18.403228 systemd-logind[1956]: New seat seat0. 
Mar 3 13:38:18.406170 systemd[1]: Started systemd-logind.service - User Login Management. Mar 3 13:38:18.408250 systemd[1]: Started update-engine.service - Update Engine. Mar 3 13:38:18.411180 dbus-daemon[1945]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1711 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 3 13:38:18.411777 update_engine[1958]: I20260303 13:38:18.408907 1958 update_check_scheduler.cc:74] Next update check in 10m13s Mar 3 13:38:18.416005 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 3 13:38:18.425709 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 3 13:38:18.485200 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 3 13:38:18.486134 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 3 13:38:18.525292 bash[2029]: Updated "/home/core/.ssh/authorized_keys" Mar 3 13:38:18.532010 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 3 13:38:18.538751 systemd[1]: Starting sshkeys.service... Mar 3 13:38:18.551907 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 3 13:38:18.579138 extend-filesystems[2006]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 3 13:38:18.579138 extend-filesystems[2006]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 3 13:38:18.579138 extend-filesystems[2006]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 3 13:38:18.592022 extend-filesystems[1948]: Resized filesystem in /dev/nvme0n1p9 Mar 3 13:38:18.580569 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 3 13:38:18.581913 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 3 13:38:18.593579 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 3 13:38:18.597802 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 3 13:38:18.648795 systemd-coredump[2008]: Process 1951 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1951: #0 0x000055861e1f5aeb n/a (ntpd + 0x68aeb) #1 0x000055861e19ecdf n/a (ntpd + 0x11cdf) #2 0x000055861e19f575 n/a (ntpd + 0x12575) #3 0x000055861e19ad8a n/a (ntpd + 0xdd8a) #4 0x000055861e19c5d3 n/a (ntpd + 0xf5d3) #5 0x000055861e1a4fd1 n/a (ntpd + 0x17fd1) #6 0x000055861e195c2d n/a (ntpd + 0x8c2d) #7 0x00007f3859b1016c n/a (libc.so.6 + 0x2716c) #8 0x00007f3859b10229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055861e195c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Mar 3 13:38:18.657349 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 3 13:38:18.657481 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 3 13:38:18.666538 systemd[1]: systemd-coredump@0-2005-0.service: Deactivated successfully. Mar 3 13:38:18.674701 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 3 13:38:18.680756 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 3 13:38:18.682316 dbus-daemon[1945]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 3 13:38:18.690475 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 3 13:38:18.796819 locksmithd[2018]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 13:38:18.809071 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:38:18.814372 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 13:38:18.841728 coreos-metadata[2048]: Mar 03 13:38:18.840 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 3 13:38:18.843276 coreos-metadata[2048]: Mar 03 13:38:18.843 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 3 13:38:18.844907 coreos-metadata[2048]: Mar 03 13:38:18.844 INFO Fetch successful
Mar 3 13:38:18.844907 coreos-metadata[2048]: Mar 03 13:38:18.844 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 3 13:38:18.846695 coreos-metadata[2048]: Mar 03 13:38:18.846 INFO Fetch successful
Mar 3 13:38:18.847703 unknown[2048]: wrote ssh authorized keys file for user: core
Mar 3 13:38:18.893184 ntpd[2130]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: ----------------------------------------------------
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: corporation. Support and training for ntp-4 are
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: available at https://www.nwtime.org/support
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: ----------------------------------------------------
Mar 3 13:38:18.894244 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: proto: precision = 0.056 usec (-24)
Mar 3 13:38:18.893240 ntpd[2130]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:38:18.893248 ntpd[2130]: ----------------------------------------------------
Mar 3 13:38:18.893254 ntpd[2130]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:38:18.893260 ntpd[2130]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:38:18.893267 ntpd[2130]: corporation. Support and training for ntp-4 are
Mar 3 13:38:18.893273 ntpd[2130]: available at https://www.nwtime.org/support
Mar 3 13:38:18.893280 ntpd[2130]: ----------------------------------------------------
Mar 3 13:38:18.893783 ntpd[2130]: proto: precision = 0.056 usec (-24)
Mar 3 13:38:18.895081 ntpd[2130]: basedate set to 2026-02-19
Mar 3 13:38:18.895420 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: basedate set to 2026-02-19
Mar 3 13:38:18.895420 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:38:18.895420 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:38:18.895420 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:38:18.895094 ntpd[2130]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:38:18.895165 ntpd[2130]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:38:18.895185 ntpd[2130]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:38:18.901870 kernel: ntpd[2130]: segfault at 24 ip 00005653ba763aeb sp 00007ffcdfbccb70 error 4 in ntpd[68aeb,5653ba701000+80000] likely on CPU 1 (core 0, socket 0)
Mar 3 13:38:18.901968 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Mar 3 13:38:18.895759 ntpd[2130]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:38:18.902052 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:38:18.902052 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: Listen normally on 3 eth0 172.31.31.254:123
Mar 3 13:38:18.902052 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: Listen normally on 4 lo [::1]:123
Mar 3 13:38:18.902052 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: bind(21) AF_INET6 [fe80::4e5:11ff:fec9:e059%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 13:38:18.902052 ntpd[2130]: 3 Mar 13:38:18 ntpd[2130]: unable to create socket on eth0 (5) for [fe80::4e5:11ff:fec9:e059%2]:123
Mar 3 13:38:18.895787 ntpd[2130]: Listen normally on 3 eth0 172.31.31.254:123
Mar 3 13:38:18.895809 ntpd[2130]: Listen normally on 4 lo [::1]:123
Mar 3 13:38:18.895832 ntpd[2130]: bind(21) AF_INET6 [fe80::4e5:11ff:fec9:e059%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 13:38:18.895992 ntpd[2130]: unable to create socket on eth0 (5) for [fe80::4e5:11ff:fec9:e059%2]:123
Mar 3 13:38:18.912335 polkitd[2096]: Started polkitd version 126
Mar 3 13:38:18.917435 update-ssh-keys[2137]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 13:38:18.919746 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 3 13:38:18.928638 systemd-coredump[2149]: Process 2130 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 3 13:38:18.928902 systemd[1]: Finished sshkeys.service.
Mar 3 13:38:18.934644 containerd[1982]: time="2026-03-03T13:38:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 3 13:38:18.936183 systemd[1]: Started systemd-coredump@1-2149-0.service - Process Core Dump (PID 2149/UID 0).
Mar 3 13:38:18.937082 containerd[1982]: time="2026-03-03T13:38:18.937048531Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.964975467Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.112µs"
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965011282Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965030724Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965166513Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965178705Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965203693Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965261693Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965272402Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965513036Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965524998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965537034Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968378 containerd[1982]: time="2026-03-03T13:38:18.965545099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968693 containerd[1982]: time="2026-03-03T13:38:18.965608131Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968693 containerd[1982]: time="2026-03-03T13:38:18.965787135Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968693 containerd[1982]: time="2026-03-03T13:38:18.965811473Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:38:18.968693 containerd[1982]: time="2026-03-03T13:38:18.965820430Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 3 13:38:18.969042 containerd[1982]: time="2026-03-03T13:38:18.968929736Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 3 13:38:18.970661 containerd[1982]: time="2026-03-03T13:38:18.969174913Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 3 13:38:18.970661 containerd[1982]: time="2026-03-03T13:38:18.969248969Z" level=info msg="metadata content store policy set" policy=shared
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977262301Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977320449Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977333779Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977346061Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977364471Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977375155Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977388943Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977401666Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977415920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977426000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977435075Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977449348Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977557572Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 3 13:38:18.978868 containerd[1982]: time="2026-03-03T13:38:18.977574874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977588466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977600524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977612367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977622734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977633795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977643242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977653529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977663573Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977674020Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977721584Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977734085Z" level=info msg="Start snapshots syncer"
Mar 3 13:38:18.979210 containerd[1982]: time="2026-03-03T13:38:18.977747933Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 3 13:38:18.979464 containerd[1982]: time="2026-03-03T13:38:18.978001662Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 13:38:18.979464 containerd[1982]: time="2026-03-03T13:38:18.978049750Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978090856Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978174848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978194007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978204849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978214564Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978227452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978238145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978248499Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978269135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978280035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978289560Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978308216Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978322549Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:38:18.979561 containerd[1982]: time="2026-03-03T13:38:18.978331107Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978339833Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978347256Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978357492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978373625Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978391477Z" level=info msg="runtime interface created"
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978396560Z" level=info msg="created NRI interface"
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978405090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978416516Z" level=info msg="Connect containerd service"
Mar 3 13:38:18.980869 containerd[1982]: time="2026-03-03T13:38:18.978432658Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 3 13:38:18.984445 polkitd[2096]: Loading rules from directory /etc/polkit-1/rules.d
Mar 3 13:38:18.987625 polkitd[2096]: Loading rules from directory /run/polkit-1/rules.d
Mar 3 13:38:18.987699 polkitd[2096]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 3 13:38:18.988051 polkitd[2096]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Mar 3 13:38:18.988077 polkitd[2096]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 3 13:38:18.988112 polkitd[2096]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 3 13:38:18.991834 polkitd[2096]: Finished loading, compiling and executing 2 rules
Mar 3 13:38:18.992111 systemd[1]: Started polkit.service - Authorization Manager.
Mar 3 13:38:18.993473 containerd[1982]: time="2026-03-03T13:38:18.993440678Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 13:38:19.002299 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 3 13:38:19.005274 polkitd[2096]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 3 13:38:19.092247 systemd-hostnamed[2021]: Hostname set to (transient)
Mar 3 13:38:19.092797 systemd-resolved[1899]: System hostname changed to 'ip-172-31-31-254'.
Mar 3 13:38:19.105289 systemd-coredump[2151]: Process 2130 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2130: #0 0x00005653ba763aeb n/a (ntpd + 0x68aeb) #1 0x00005653ba70ccdf n/a (ntpd + 0x11cdf) #2 0x00005653ba70d575 n/a (ntpd + 0x12575) #3 0x00005653ba708d8a n/a (ntpd + 0xdd8a) #4 0x00005653ba70a5d3 n/a (ntpd + 0xf5d3) #5 0x00005653ba712fd1 n/a (ntpd + 0x17fd1) #6 0x00005653ba703c2d n/a (ntpd + 0x8c2d) #7 0x00007fbaf654816c n/a (libc.so.6 + 0x2716c) #8 0x00007fbaf6548229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005653ba703c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Mar 3 13:38:19.108785 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 3 13:38:19.108950 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 3 13:38:19.116595 systemd[1]: systemd-coredump@1-2149-0.service: Deactivated successfully.
Mar 3 13:38:19.140068 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 3 13:38:19.165360 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 3 13:38:19.169115 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 3 13:38:19.185698 systemd[1]: issuegen.service: Deactivated successfully.
Mar 3 13:38:19.187520 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 3 13:38:19.194142 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 3 13:38:19.214399 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2.
Mar 3 13:38:19.215645 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 3 13:38:19.220185 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 3 13:38:19.224764 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 13:38:19.228960 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 3 13:38:19.232120 systemd[1]: Reached target getty.target - Login Prompts.
Mar 3 13:38:19.251377 containerd[1982]: time="2026-03-03T13:38:19.251343561Z" level=info msg="Start subscribing containerd event"
Mar 3 13:38:19.251533 containerd[1982]: time="2026-03-03T13:38:19.251509094Z" level=info msg="Start recovering state"
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.253096433Z" level=info msg="Start event monitor"
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.253119634Z" level=info msg="Start cni network conf syncer for default"
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.253132244Z" level=info msg="Start streaming server"
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.253141270Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.253147709Z" level=info msg="runtime interface starting up..."
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.253153206Z" level=info msg="starting plugins..."
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.253164430Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.251782895Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 3 13:38:19.253853 containerd[1982]: time="2026-03-03T13:38:19.253287103Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 3 13:38:19.253411 systemd[1]: Started containerd.service - containerd container runtime.
Mar 3 13:38:19.254515 containerd[1982]: time="2026-03-03T13:38:19.254489824Z" level=info msg="containerd successfully booted in 0.320172s"
Mar 3 13:38:19.264892 ntpd[2207]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:38:19.264949 ntpd[2207]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:38:19.265245 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:38:19.265245 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:38:19.265245 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: ----------------------------------------------------
Mar 3 13:38:19.265245 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:38:19.265245 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:38:19.265245 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: corporation. Support and training for ntp-4 are
Mar 3 13:38:19.265245 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: available at https://www.nwtime.org/support
Mar 3 13:38:19.265245 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: ----------------------------------------------------
Mar 3 13:38:19.264956 ntpd[2207]: ----------------------------------------------------
Mar 3 13:38:19.265605 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: proto: precision = 0.056 usec (-24)
Mar 3 13:38:19.264963 ntpd[2207]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:38:19.264970 ntpd[2207]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:38:19.264976 ntpd[2207]: corporation. Support and training for ntp-4 are
Mar 3 13:38:19.264983 ntpd[2207]: available at https://www.nwtime.org/support
Mar 3 13:38:19.265742 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: basedate set to 2026-02-19
Mar 3 13:38:19.265742 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:38:19.264989 ntpd[2207]: ----------------------------------------------------
Mar 3 13:38:19.265812 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:38:19.265812 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:38:19.265490 ntpd[2207]: proto: precision = 0.056 usec (-24)
Mar 3 13:38:19.265682 ntpd[2207]: basedate set to 2026-02-19
Mar 3 13:38:19.265691 ntpd[2207]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:38:19.265753 ntpd[2207]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:38:19.265773 ntpd[2207]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:38:19.267801 ntpd[2207]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:38:19.267835 ntpd[2207]: Listen normally on 3 eth0 172.31.31.254:123
Mar 3 13:38:19.267961 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:38:19.267961 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: Listen normally on 3 eth0 172.31.31.254:123
Mar 3 13:38:19.267961 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: Listen normally on 4 lo [::1]:123
Mar 3 13:38:19.267961 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: bind(21) AF_INET6 [fe80::4e5:11ff:fec9:e059%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 13:38:19.267961 ntpd[2207]: 3 Mar 13:38:19 ntpd[2207]: unable to create socket on eth0 (5) for [fe80::4e5:11ff:fec9:e059%2]:123
Mar 3 13:38:19.267880 ntpd[2207]: Listen normally on 4 lo [::1]:123
Mar 3 13:38:19.267905 ntpd[2207]: bind(21) AF_INET6 [fe80::4e5:11ff:fec9:e059%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 13:38:19.267920 ntpd[2207]: unable to create socket on eth0 (5) for [fe80::4e5:11ff:fec9:e059%2]:123
Mar 3 13:38:19.270446 kernel: ntpd[2207]: segfault at 24 ip 000056491faf5aeb sp 00007fffdf3212b0 error 4 in ntpd[68aeb,56491fa93000+80000] likely on CPU 1 (core 0, socket 0)
Mar 3 13:38:19.270514 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Mar 3 13:38:19.277696 systemd-coredump[2210]: Process 2207 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 3 13:38:19.282261 systemd[1]: Started systemd-coredump@2-2210-0.service - Process Core Dump (PID 2210/UID 0).
Mar 3 13:38:19.347357 tar[1967]: linux-amd64/README.md
Mar 3 13:38:19.363689 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 3 13:38:19.367797 systemd-coredump[2211]: Process 2207 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2207: #0 0x000056491faf5aeb n/a (ntpd + 0x68aeb) #1 0x000056491fa9ecdf n/a (ntpd + 0x11cdf) #2 0x000056491fa9f575 n/a (ntpd + 0x12575) #3 0x000056491fa9ad8a n/a (ntpd + 0xdd8a) #4 0x000056491fa9c5d3 n/a (ntpd + 0xf5d3) #5 0x000056491faa4fd1 n/a (ntpd + 0x17fd1) #6 0x000056491fa95c2d n/a (ntpd + 0x8c2d) #7 0x00007f0deb44116c n/a (libc.so.6 + 0x2716c) #8 0x00007f0deb441229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000056491fa95c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Mar 3 13:38:19.368653 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 3 13:38:19.368790 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 3 13:38:19.371916 systemd[1]: systemd-coredump@2-2210-0.service: Deactivated successfully.
Mar 3 13:38:19.449037 systemd-networkd[1711]: eth0: Gained IPv6LL
Mar 3 13:38:19.451330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 3 13:38:19.453028 systemd[1]: Reached target network-online.target - Network is Online.
Mar 3 13:38:19.455003 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 3 13:38:19.459072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:38:19.465097 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 3 13:38:19.472088 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 3.
Mar 3 13:38:19.477457 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 13:38:19.501020 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 3 13:38:19.510058 ntpd[2230]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:38:19.510138 ntpd[2230]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:38:19.510545 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:22:55 UTC 2026 (1): Starting
Mar 3 13:38:19.510545 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 13:38:19.510545 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: ----------------------------------------------------
Mar 3 13:38:19.510545 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:38:19.510545 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:38:19.510545 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: corporation. Support and training for ntp-4 are
Mar 3 13:38:19.510545 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: available at https://www.nwtime.org/support
Mar 3 13:38:19.510545 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: ----------------------------------------------------
Mar 3 13:38:19.510150 ntpd[2230]: ----------------------------------------------------
Mar 3 13:38:19.511216 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: proto: precision = 0.098 usec (-23)
Mar 3 13:38:19.510159 ntpd[2230]: ntp-4 is maintained by Network Time Foundation,
Mar 3 13:38:19.511300 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: basedate set to 2026-02-19
Mar 3 13:38:19.511300 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:38:19.510168 ntpd[2230]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 13:38:19.511426 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:38:19.511426 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:38:19.510177 ntpd[2230]: corporation. Support and training for ntp-4 are
Mar 3 13:38:19.510186 ntpd[2230]: available at https://www.nwtime.org/support
Mar 3 13:38:19.510196 ntpd[2230]: ----------------------------------------------------
Mar 3 13:38:19.511651 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:38:19.511651 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Listen normally on 3 eth0 172.31.31.254:123
Mar 3 13:38:19.511651 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Listen normally on 4 lo [::1]:123
Mar 3 13:38:19.511651 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Listen normally on 5 eth0 [fe80::4e5:11ff:fec9:e059%2]:123
Mar 3 13:38:19.510970 ntpd[2230]: proto: precision = 0.098 usec (-23)
Mar 3 13:38:19.511900 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: Listening on routing socket on fd #22 for interface updates
Mar 3 13:38:19.511239 ntpd[2230]: basedate set to 2026-02-19
Mar 3 13:38:19.511252 ntpd[2230]: gps base set to 2026-02-22 (week 2407)
Mar 3 13:38:19.511341 ntpd[2230]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 13:38:19.511369 ntpd[2230]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 13:38:19.511557 ntpd[2230]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 13:38:19.511585 ntpd[2230]: Listen normally on 3 eth0 172.31.31.254:123
Mar 3 13:38:19.511615 ntpd[2230]: Listen normally on 4 lo [::1]:123
Mar 3 13:38:19.511641 ntpd[2230]: Listen normally on 5 eth0 [fe80::4e5:11ff:fec9:e059%2]:123
Mar 3 13:38:19.511667 ntpd[2230]: Listening on routing socket on fd #22 for interface updates
Mar 3 13:38:19.513790 ntpd[2230]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 13:38:19.515368 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 13:38:19.515368 ntpd[2230]: 3 Mar 13:38:19 ntpd[2230]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 13:38:19.513824 ntpd[2230]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 3 13:38:19.586898 amazon-ssm-agent[2225]: Initializing new seelog logger
Mar 3 13:38:19.587405 amazon-ssm-agent[2225]: New Seelog Logger Creation Complete
Mar 3 13:38:19.587520 amazon-ssm-agent[2225]: 2026/03/03 13:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:19.587569 amazon-ssm-agent[2225]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:19.587954 amazon-ssm-agent[2225]: 2026/03/03 13:38:19 processing appconfig overrides
Mar 3 13:38:19.588331 amazon-ssm-agent[2225]: 2026/03/03 13:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:19.588388 amazon-ssm-agent[2225]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:19.588497 amazon-ssm-agent[2225]: 2026/03/03 13:38:19 processing appconfig overrides
Mar 3 13:38:19.588765 amazon-ssm-agent[2225]: 2026/03/03 13:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:19.588805 amazon-ssm-agent[2225]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:19.588905 amazon-ssm-agent[2225]: 2026/03/03 13:38:19 processing appconfig overrides
Mar 3 13:38:19.589028 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.5882 INFO Proxy environment variables:
Mar 3 13:38:19.591168 amazon-ssm-agent[2225]: 2026/03/03 13:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:19.591168 amazon-ssm-agent[2225]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:19.591249 amazon-ssm-agent[2225]: 2026/03/03 13:38:19 processing appconfig overrides
Mar 3 13:38:19.689915 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.5882 INFO https_proxy:
Mar 3 13:38:19.786934 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.5882 INFO http_proxy:
Mar 3 13:38:19.885719 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.5882 INFO no_proxy:
Mar 3 13:38:19.984758 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.5885 INFO Checking if agent identity type OnPrem can be assumed
Mar 3 13:38:20.057062 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 3 13:38:20.061932 systemd[1]: Started sshd@0-172.31.31.254:22-68.220.241.50:60426.service - OpenSSH per-connection server daemon (68.220.241.50:60426).
Mar 3 13:38:20.084273 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.5886 INFO Checking if agent identity type EC2 can be assumed
Mar 3 13:38:20.183509 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6270 INFO Agent will take identity from EC2
Mar 3 13:38:20.281514 amazon-ssm-agent[2225]: 2026/03/03 13:38:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:20.281514 amazon-ssm-agent[2225]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 13:38:20.281514 amazon-ssm-agent[2225]: 2026/03/03 13:38:20 processing appconfig overrides
Mar 3 13:38:20.282437 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6296 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6297 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6297 INFO [amazon-ssm-agent] Starting Core Agent
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6297 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6297 INFO [Registrar] Starting registrar module
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6316 INFO [EC2Identity] Checking disk for registration info
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6316 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:19.6316 INFO [EC2Identity] Generating registration keypair
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.2466 INFO [EC2Identity] Checking write access before registering
Mar 3 13:38:20.306561 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.2469 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Mar 3 13:38:20.306954 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.2811 INFO [EC2Identity] EC2 registration was successful.
Mar 3 13:38:20.306954 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.2811 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Mar 3 13:38:20.306954 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.2812 INFO [CredentialRefresher] credentialRefresher has started
Mar 3 13:38:20.306954 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.2812 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 3 13:38:20.306954 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.3063 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 3 13:38:20.306954 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.3065 INFO [CredentialRefresher] Credentials ready
Mar 3 13:38:20.380555 amazon-ssm-agent[2225]: 2026-03-03 13:38:20.3066 INFO [CredentialRefresher] Next credential rotation will be in 29.999994604166666 minutes
Mar 3 13:38:20.551956 sshd[2248]: Accepted publickey for core from 68.220.241.50 port 60426 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:38:20.555212 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:38:20.562829 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 3 13:38:20.564505 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 3 13:38:20.572636 systemd-logind[1956]: New session 1 of user core.
Mar 3 13:38:20.583188 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 3 13:38:20.586308 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 3 13:38:20.598890 (systemd)[2253]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 3 13:38:20.604809 systemd-logind[1956]: New session c1 of user core.
Mar 3 13:38:20.755146 systemd[2253]: Queued start job for default target default.target.
Mar 3 13:38:20.759945 systemd[2253]: Created slice app.slice - User Application Slice.
Mar 3 13:38:20.759983 systemd[2253]: Reached target paths.target - Paths.
Mar 3 13:38:20.760194 systemd[2253]: Reached target timers.target - Timers.
Mar 3 13:38:20.761724 systemd[2253]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 3 13:38:20.773370 systemd[2253]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 3 13:38:20.773771 systemd[2253]: Reached target sockets.target - Sockets.
Mar 3 13:38:20.773833 systemd[2253]: Reached target basic.target - Basic System.
Mar 3 13:38:20.773891 systemd[2253]: Reached target default.target - Main User Target.
Mar 3 13:38:20.773924 systemd[2253]: Startup finished in 161ms.
Mar 3 13:38:20.773961 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 3 13:38:20.784106 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 3 13:38:21.030732 systemd[1]: Started sshd@1-172.31.31.254:22-68.220.241.50:60438.service - OpenSSH per-connection server daemon (68.220.241.50:60438).
Mar 3 13:38:21.320642 amazon-ssm-agent[2225]: 2026-03-03 13:38:21.3204 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 3 13:38:21.421566 amazon-ssm-agent[2225]: 2026-03-03 13:38:21.3227 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2269) started
Mar 3 13:38:21.466923 sshd[2264]: Accepted publickey for core from 68.220.241.50 port 60438 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:38:21.468114 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:38:21.472893 systemd-logind[1956]: New session 2 of user core.
Mar 3 13:38:21.478035 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 3 13:38:21.510309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:38:21.511762 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 3 13:38:21.512713 systemd[1]: Startup finished in 2.642s (kernel) + 8.717s (initrd) + 7.218s (userspace) = 18.577s.
Mar 3 13:38:21.518396 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:38:21.522629 amazon-ssm-agent[2225]: 2026-03-03 13:38:21.3228 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 3 13:38:21.702229 sshd[2280]: Connection closed by 68.220.241.50 port 60438
Mar 3 13:38:21.703147 sshd-session[2264]: pam_unix(sshd:session): session closed for user core
Mar 3 13:38:21.709124 systemd-logind[1956]: Session 2 logged out. Waiting for processes to exit.
Mar 3 13:38:21.710283 systemd[1]: sshd@1-172.31.31.254:22-68.220.241.50:60438.service: Deactivated successfully.
Mar 3 13:38:21.712762 systemd[1]: session-2.scope: Deactivated successfully.
Mar 3 13:38:21.716023 systemd-logind[1956]: Removed session 2.
Mar 3 13:38:21.795093 systemd[1]: Started sshd@2-172.31.31.254:22-68.220.241.50:37504.service - OpenSSH per-connection server daemon (68.220.241.50:37504).
Mar 3 13:38:22.222511 sshd[2300]: Accepted publickey for core from 68.220.241.50 port 37504 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:38:22.224382 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:38:22.230471 systemd-logind[1956]: New session 3 of user core.
Mar 3 13:38:22.236123 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 3 13:38:22.415777 kubelet[2286]: E0303 13:38:22.415720 2286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:38:22.418418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:38:22.418569 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:38:22.418896 systemd[1]: kubelet.service: Consumed 910ms CPU time, 257M memory peak.
Mar 3 13:38:22.457824 sshd[2303]: Connection closed by 68.220.241.50 port 37504
Mar 3 13:38:22.458908 sshd-session[2300]: pam_unix(sshd:session): session closed for user core
Mar 3 13:38:22.462664 systemd-logind[1956]: Session 3 logged out. Waiting for processes to exit.
Mar 3 13:38:22.463277 systemd[1]: sshd@2-172.31.31.254:22-68.220.241.50:37504.service: Deactivated successfully.
Mar 3 13:38:22.465516 systemd[1]: session-3.scope: Deactivated successfully.
Mar 3 13:38:22.467649 systemd-logind[1956]: Removed session 3.
Mar 3 13:38:22.545459 systemd[1]: Started sshd@3-172.31.31.254:22-68.220.241.50:37506.service - OpenSSH per-connection server daemon (68.220.241.50:37506).
Mar 3 13:38:22.976701 sshd[2311]: Accepted publickey for core from 68.220.241.50 port 37506 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:38:22.977913 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:38:22.982834 systemd-logind[1956]: New session 4 of user core.
Mar 3 13:38:22.994070 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 3 13:38:23.216055 sshd[2314]: Connection closed by 68.220.241.50 port 37506
Mar 3 13:38:23.216606 sshd-session[2311]: pam_unix(sshd:session): session closed for user core
Mar 3 13:38:23.220228 systemd-logind[1956]: Session 4 logged out. Waiting for processes to exit.
Mar 3 13:38:23.221234 systemd[1]: sshd@3-172.31.31.254:22-68.220.241.50:37506.service: Deactivated successfully.
Mar 3 13:38:23.223016 systemd[1]: session-4.scope: Deactivated successfully.
Mar 3 13:38:23.224247 systemd-logind[1956]: Removed session 4.
Mar 3 13:38:23.308637 systemd[1]: Started sshd@4-172.31.31.254:22-68.220.241.50:37516.service - OpenSSH per-connection server daemon (68.220.241.50:37516).
Mar 3 13:38:23.742990 sshd[2320]: Accepted publickey for core from 68.220.241.50 port 37516 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:38:23.744200 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:38:23.749368 systemd-logind[1956]: New session 5 of user core.
Mar 3 13:38:23.754040 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 3 13:38:23.956714 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 3 13:38:23.957025 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:38:23.970982 sudo[2324]: pam_unix(sudo:session): session closed for user root
Mar 3 13:38:24.049419 sshd[2323]: Connection closed by 68.220.241.50 port 37516
Mar 3 13:38:24.051053 sshd-session[2320]: pam_unix(sshd:session): session closed for user core
Mar 3 13:38:24.055176 systemd-logind[1956]: Session 5 logged out. Waiting for processes to exit.
Mar 3 13:38:24.055696 systemd[1]: sshd@4-172.31.31.254:22-68.220.241.50:37516.service: Deactivated successfully.
Mar 3 13:38:24.057809 systemd[1]: session-5.scope: Deactivated successfully.
Mar 3 13:38:24.059502 systemd-logind[1956]: Removed session 5.
Mar 3 13:38:24.136743 systemd[1]: Started sshd@5-172.31.31.254:22-68.220.241.50:37518.service - OpenSSH per-connection server daemon (68.220.241.50:37518).
Mar 3 13:38:24.561424 sshd[2330]: Accepted publickey for core from 68.220.241.50 port 37518 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:38:24.562797 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:38:24.567979 systemd-logind[1956]: New session 6 of user core.
Mar 3 13:38:24.576039 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 3 13:38:24.720108 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 3 13:38:24.720369 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:38:24.725125 sudo[2335]: pam_unix(sudo:session): session closed for user root
Mar 3 13:38:24.730571 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 3 13:38:24.730828 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:38:24.740544 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:38:24.776594 augenrules[2357]: No rules
Mar 3 13:38:24.777633 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:38:24.777854 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:38:24.779455 sudo[2334]: pam_unix(sudo:session): session closed for user root
Mar 3 13:38:24.856298 sshd[2333]: Connection closed by 68.220.241.50 port 37518
Mar 3 13:38:24.857062 sshd-session[2330]: pam_unix(sshd:session): session closed for user core
Mar 3 13:38:24.861133 systemd[1]: sshd@5-172.31.31.254:22-68.220.241.50:37518.service: Deactivated successfully.
Mar 3 13:38:24.863155 systemd[1]: session-6.scope: Deactivated successfully.
Mar 3 13:38:24.864143 systemd-logind[1956]: Session 6 logged out. Waiting for processes to exit.
Mar 3 13:38:24.866281 systemd-logind[1956]: Removed session 6.
Mar 3 13:38:24.960533 systemd[1]: Started sshd@6-172.31.31.254:22-68.220.241.50:37526.service - OpenSSH per-connection server daemon (68.220.241.50:37526).
Mar 3 13:38:25.423087 sshd[2366]: Accepted publickey for core from 68.220.241.50 port 37526 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:38:25.424742 sshd-session[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:38:25.429663 systemd-logind[1956]: New session 7 of user core.
Mar 3 13:38:25.435013 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 3 13:38:25.594017 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 3 13:38:25.594309 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:38:26.185246 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 3 13:38:26.196251 (dockerd)[2390]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 3 13:38:26.974998 systemd-resolved[1899]: Clock change detected. Flushing caches.
Mar 3 13:38:27.150498 dockerd[2390]: time="2026-03-03T13:38:27.150255539Z" level=info msg="Starting up"
Mar 3 13:38:27.153343 dockerd[2390]: time="2026-03-03T13:38:27.153308681Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 3 13:38:27.165180 dockerd[2390]: time="2026-03-03T13:38:27.165139333Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 3 13:38:27.201549 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport972829481-merged.mount: Deactivated successfully.
Mar 3 13:38:27.267138 dockerd[2390]: time="2026-03-03T13:38:27.266878304Z" level=info msg="Loading containers: start."
Mar 3 13:38:27.278899 kernel: Initializing XFRM netlink socket
Mar 3 13:38:27.602874 (udev-worker)[2412]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 13:38:27.644042 systemd-networkd[1711]: docker0: Link UP
Mar 3 13:38:27.655676 dockerd[2390]: time="2026-03-03T13:38:27.655619673Z" level=info msg="Loading containers: done."
Mar 3 13:38:27.678022 dockerd[2390]: time="2026-03-03T13:38:27.677971844Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 3 13:38:27.678200 dockerd[2390]: time="2026-03-03T13:38:27.678062109Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 3 13:38:27.678200 dockerd[2390]: time="2026-03-03T13:38:27.678145711Z" level=info msg="Initializing buildkit"
Mar 3 13:38:27.717587 dockerd[2390]: time="2026-03-03T13:38:27.717452167Z" level=info msg="Completed buildkit initialization"
Mar 3 13:38:27.724599 dockerd[2390]: time="2026-03-03T13:38:27.724552766Z" level=info msg="Daemon has completed initialization"
Mar 3 13:38:27.724932 dockerd[2390]: time="2026-03-03T13:38:27.724722982Z" level=info msg="API listen on /run/docker.sock"
Mar 3 13:38:27.724799 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 3 13:38:28.617996 containerd[1982]: time="2026-03-03T13:38:28.617947129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 3 13:38:29.074405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123256856.mount: Deactivated successfully.
Mar 3 13:38:30.859581 containerd[1982]: time="2026-03-03T13:38:30.859525669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:30.860533 containerd[1982]: time="2026-03-03T13:38:30.860502065Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 3 13:38:30.861863 containerd[1982]: time="2026-03-03T13:38:30.861816708Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:30.864732 containerd[1982]: time="2026-03-03T13:38:30.864669768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:30.865229 containerd[1982]: time="2026-03-03T13:38:30.865202430Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.247218835s"
Mar 3 13:38:30.865285 containerd[1982]: time="2026-03-03T13:38:30.865236316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 3 13:38:30.865709 containerd[1982]: time="2026-03-03T13:38:30.865685574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 3 13:38:32.582689 containerd[1982]: time="2026-03-03T13:38:32.582633594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:32.583712 containerd[1982]: time="2026-03-03T13:38:32.583515064Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 3 13:38:32.584603 containerd[1982]: time="2026-03-03T13:38:32.584580000Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:32.587158 containerd[1982]: time="2026-03-03T13:38:32.587134134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:32.588566 containerd[1982]: time="2026-03-03T13:38:32.587999871Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.722287374s"
Mar 3 13:38:32.588566 containerd[1982]: time="2026-03-03T13:38:32.588029152Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 3 13:38:32.588704 containerd[1982]: time="2026-03-03T13:38:32.588677763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 3 13:38:32.893639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:38:32.895366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:38:33.259585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:38:33.269228 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:38:33.307540 kubelet[2672]: E0303 13:38:33.307467 2672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:38:33.311114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:38:33.311258 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:38:33.311580 systemd[1]: kubelet.service: Consumed 165ms CPU time, 110.8M memory peak.
Mar 3 13:38:34.216142 containerd[1982]: time="2026-03-03T13:38:34.216073703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:34.217201 containerd[1982]: time="2026-03-03T13:38:34.216989631Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 3 13:38:34.218167 containerd[1982]: time="2026-03-03T13:38:34.218139227Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:34.220667 containerd[1982]: time="2026-03-03T13:38:34.220631784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:34.221463 containerd[1982]: time="2026-03-03T13:38:34.221433784Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.632684588s"
Mar 3 13:38:34.221533 containerd[1982]: time="2026-03-03T13:38:34.221464889Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 3 13:38:34.222000 containerd[1982]: time="2026-03-03T13:38:34.221982047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 3 13:38:35.516611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919779886.mount: Deactivated successfully.
Mar 3 13:38:35.834451 containerd[1982]: time="2026-03-03T13:38:35.834340089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:35.835607 containerd[1982]: time="2026-03-03T13:38:35.835561863Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 3 13:38:35.836724 containerd[1982]: time="2026-03-03T13:38:35.836670184Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:35.838542 containerd[1982]: time="2026-03-03T13:38:35.838492525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:35.839206 containerd[1982]: time="2026-03-03T13:38:35.839022612Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.617011271s"
Mar 3 13:38:35.839206 containerd[1982]: time="2026-03-03T13:38:35.839058393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 3 13:38:35.839651 containerd[1982]: time="2026-03-03T13:38:35.839624020Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 3 13:38:36.399357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365008550.mount: Deactivated successfully.
Mar 3 13:38:38.234739 containerd[1982]: time="2026-03-03T13:38:38.234684775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:38.236087 containerd[1982]: time="2026-03-03T13:38:38.235815589Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 3 13:38:38.237178 containerd[1982]: time="2026-03-03T13:38:38.237143114Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:38.240374 containerd[1982]: time="2026-03-03T13:38:38.240343821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:38.241833 containerd[1982]: time="2026-03-03T13:38:38.241405968Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.401744539s"
Mar 3 13:38:38.241833 containerd[1982]: time="2026-03-03T13:38:38.241441696Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 3 13:38:38.242314 containerd[1982]: time="2026-03-03T13:38:38.242057039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 3 13:38:38.688009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136237916.mount: Deactivated successfully.
Mar 3 13:38:38.692189 containerd[1982]: time="2026-03-03T13:38:38.692144841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:38.693235 containerd[1982]: time="2026-03-03T13:38:38.693076884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 3 13:38:38.694159 containerd[1982]: time="2026-03-03T13:38:38.694129207Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:38.696077 containerd[1982]: time="2026-03-03T13:38:38.696028627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:38.697274 containerd[1982]: time="2026-03-03T13:38:38.696745526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 454.653764ms"
Mar 3 13:38:38.697274 containerd[1982]: time="2026-03-03T13:38:38.696784463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 3 13:38:38.697404 containerd[1982]: time="2026-03-03T13:38:38.697332076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 3 13:38:39.223066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239350635.mount: Deactivated successfully.
Mar 3 13:38:40.243778 containerd[1982]: time="2026-03-03T13:38:40.243667127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:40.245572 containerd[1982]: time="2026-03-03T13:38:40.245386514Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 3 13:38:40.247870 containerd[1982]: time="2026-03-03T13:38:40.247839001Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:40.254070 containerd[1982]: time="2026-03-03T13:38:40.254035996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:38:40.254795 containerd[1982]: time="2026-03-03T13:38:40.254769777Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.557411274s"
Mar 3 13:38:40.255059 containerd[1982]: time="2026-03-03T13:38:40.254902725Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 3 13:38:41.609367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:38:41.609622 systemd[1]: kubelet.service: Consumed 165ms CPU time, 110.8M memory peak.
Mar 3 13:38:41.612538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:38:41.647631 systemd[1]: Reload requested from client PID 2839 ('systemctl') (unit session-7.scope)...
Mar 3 13:38:41.647649 systemd[1]: Reloading...
Mar 3 13:38:41.765782 zram_generator::config[2884]: No configuration found.
Mar 3 13:38:42.014673 systemd[1]: Reloading finished in 366 ms.
Mar 3 13:38:42.070533 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 3 13:38:42.070634 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 3 13:38:42.071053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:38:42.071113 systemd[1]: kubelet.service: Consumed 113ms CPU time, 98M memory peak.
Mar 3 13:38:42.073544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:38:42.329279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:38:42.339312 (kubelet)[2947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 3 13:38:42.379802 kubelet[2947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 13:38:42.625976 kubelet[2947]: I0303 13:38:42.625636 2947 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 3 13:38:42.625976 kubelet[2947]: I0303 13:38:42.625689 2947 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 13:38:42.625976 kubelet[2947]: I0303 13:38:42.625706 2947 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 13:38:42.625976 kubelet[2947]: I0303 13:38:42.625711 2947 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 3 13:38:42.626165 kubelet[2947]: I0303 13:38:42.626153 2947 server.go:951] "Client rotation is on, will bootstrap in background" Mar 3 13:38:42.639524 kubelet[2947]: I0303 13:38:42.639482 2947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 13:38:42.641121 kubelet[2947]: E0303 13:38:42.640913 2947 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.254:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 13:38:42.645384 kubelet[2947]: I0303 13:38:42.645358 2947 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 13:38:42.652586 kubelet[2947]: I0303 13:38:42.652553 2947 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 3 13:38:42.655867 kubelet[2947]: I0303 13:38:42.655809 2947 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 13:38:42.658086 kubelet[2947]: I0303 13:38:42.655864 2947 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-254","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 13:38:42.658086 kubelet[2947]: I0303 13:38:42.658086 2947 topology_manager.go:143] "Creating topology manager with none policy" Mar 3 
13:38:42.658278 kubelet[2947]: I0303 13:38:42.658099 2947 container_manager_linux.go:308] "Creating device plugin manager" Mar 3 13:38:42.658278 kubelet[2947]: I0303 13:38:42.658209 2947 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 13:38:42.660262 kubelet[2947]: I0303 13:38:42.660238 2947 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 3 13:38:42.660421 kubelet[2947]: I0303 13:38:42.660406 2947 kubelet.go:482] "Attempting to sync node with API server" Mar 3 13:38:42.660468 kubelet[2947]: I0303 13:38:42.660422 2947 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 13:38:42.660468 kubelet[2947]: I0303 13:38:42.660446 2947 kubelet.go:394] "Adding apiserver pod source" Mar 3 13:38:42.660468 kubelet[2947]: I0303 13:38:42.660455 2947 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 13:38:42.665691 kubelet[2947]: I0303 13:38:42.665653 2947 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 13:38:42.668389 kubelet[2947]: I0303 13:38:42.668363 2947 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 13:38:42.668451 kubelet[2947]: I0303 13:38:42.668402 2947 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 13:38:42.669754 kubelet[2947]: W0303 13:38:42.669710 2947 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 3 13:38:42.672817 kubelet[2947]: I0303 13:38:42.672715 2947 server.go:1257] "Started kubelet" Mar 3 13:38:42.674325 kubelet[2947]: I0303 13:38:42.674306 2947 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 3 13:38:42.680524 kubelet[2947]: E0303 13:38:42.678841 2947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.254:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.254:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-254.18995865d1413b34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-254,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-254,},FirstTimestamp:2026-03-03 13:38:42.672655156 +0000 UTC m=+0.328896904,LastTimestamp:2026-03-03 13:38:42.672655156 +0000 UTC m=+0.328896904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-254,}" Mar 3 13:38:42.681135 kubelet[2947]: I0303 13:38:42.681108 2947 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 13:38:42.681577 kubelet[2947]: I0303 13:38:42.681537 2947 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 13:38:42.681657 kubelet[2947]: I0303 13:38:42.681648 2947 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 13:38:42.683822 kubelet[2947]: I0303 13:38:42.683809 2947 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 13:38:42.684249 kubelet[2947]: I0303 13:38:42.684209 2947 server.go:317] "Adding debug handlers to kubelet server" Mar 3 13:38:42.687576 kubelet[2947]: I0303 13:38:42.686940 2947 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 13:38:42.689480 kubelet[2947]: I0303 13:38:42.689164 2947 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 3 13:38:42.689480 kubelet[2947]: E0303 13:38:42.689343 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:42.695556 kubelet[2947]: I0303 13:38:42.695525 2947 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 13:38:42.695652 kubelet[2947]: I0303 13:38:42.695595 2947 reconciler.go:29] "Reconciler: start to sync state" Mar 3 13:38:42.697011 kubelet[2947]: E0303 13:38:42.696984 2947 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-254?timeout=10s\": dial tcp 172.31.31.254:6443: connect: connection refused" interval="200ms" Mar 3 13:38:42.697325 kubelet[2947]: I0303 13:38:42.697299 2947 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 13:38:42.699364 kubelet[2947]: I0303 13:38:42.699344 2947 factory.go:223] Registration of the containerd container factory successfully Mar 3 13:38:42.699364 kubelet[2947]: I0303 13:38:42.699359 2947 factory.go:223] Registration of the systemd container factory successfully Mar 3 13:38:42.709708 kubelet[2947]: I0303 13:38:42.709592 2947 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 3 13:38:42.710876 kubelet[2947]: I0303 13:38:42.710841 2947 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 3 13:38:42.710876 kubelet[2947]: I0303 13:38:42.710864 2947 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 3 13:38:42.711003 kubelet[2947]: I0303 13:38:42.710912 2947 kubelet.go:2501] "Starting kubelet main sync loop" Mar 3 13:38:42.711003 kubelet[2947]: E0303 13:38:42.710955 2947 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 13:38:42.718273 kubelet[2947]: E0303 13:38:42.718245 2947 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 13:38:42.723278 kubelet[2947]: I0303 13:38:42.723083 2947 cpu_manager.go:225] "Starting" policy="none" Mar 3 13:38:42.723278 kubelet[2947]: I0303 13:38:42.723094 2947 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 3 13:38:42.723278 kubelet[2947]: I0303 13:38:42.723110 2947 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 3 13:38:42.725405 kubelet[2947]: I0303 13:38:42.725385 2947 policy_none.go:50] "Start" Mar 3 13:38:42.725506 kubelet[2947]: I0303 13:38:42.725498 2947 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 13:38:42.725554 kubelet[2947]: I0303 13:38:42.725548 2947 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 13:38:42.727026 kubelet[2947]: I0303 13:38:42.727012 2947 policy_none.go:44] "Start" Mar 3 13:38:42.731267 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 3 13:38:42.746048 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 3 13:38:42.750208 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 3 13:38:42.761070 kubelet[2947]: E0303 13:38:42.761039 2947 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 13:38:42.761357 kubelet[2947]: I0303 13:38:42.761346 2947 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 3 13:38:42.761470 kubelet[2947]: I0303 13:38:42.761440 2947 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 13:38:42.761737 kubelet[2947]: I0303 13:38:42.761726 2947 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 3 13:38:42.763144 kubelet[2947]: E0303 13:38:42.763127 2947 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 13:38:42.763553 kubelet[2947]: E0303 13:38:42.763536 2947 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-254\" not found" Mar 3 13:38:42.825226 systemd[1]: Created slice kubepods-burstable-pode464b983c558b5bbd267fdc98d15db55.slice - libcontainer container kubepods-burstable-pode464b983c558b5bbd267fdc98d15db55.slice. Mar 3 13:38:42.836120 kubelet[2947]: E0303 13:38:42.836056 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-254\" not found" node="ip-172-31-31-254" Mar 3 13:38:42.840086 systemd[1]: Created slice kubepods-burstable-podd60386d0c0f710dbababcbdad3ede565.slice - libcontainer container kubepods-burstable-podd60386d0c0f710dbababcbdad3ede565.slice. 
Mar 3 13:38:42.854489 kubelet[2947]: E0303 13:38:42.854453 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-254\" not found" node="ip-172-31-31-254" Mar 3 13:38:42.858412 systemd[1]: Created slice kubepods-burstable-pod3f6cd686b72720a43dbc97d5dda819f5.slice - libcontainer container kubepods-burstable-pod3f6cd686b72720a43dbc97d5dda819f5.slice. Mar 3 13:38:42.860523 kubelet[2947]: E0303 13:38:42.860495 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-254\" not found" node="ip-172-31-31-254" Mar 3 13:38:42.863841 kubelet[2947]: I0303 13:38:42.863815 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-254" Mar 3 13:38:42.864204 kubelet[2947]: E0303 13:38:42.864172 2947 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.31.254:6443/api/v1/nodes\": dial tcp 172.31.31.254:6443: connect: connection refused" node="ip-172-31-31-254" Mar 3 13:38:42.896774 kubelet[2947]: I0303 13:38:42.896736 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d60386d0c0f710dbababcbdad3ede565-ca-certs\") pod \"kube-apiserver-ip-172-31-31-254\" (UID: \"d60386d0c0f710dbababcbdad3ede565\") " pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:42.896972 kubelet[2947]: I0303 13:38:42.896832 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d60386d0c0f710dbababcbdad3ede565-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-254\" (UID: \"d60386d0c0f710dbababcbdad3ede565\") " pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:42.896972 kubelet[2947]: I0303 13:38:42.896875 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:42.896972 kubelet[2947]: I0303 13:38:42.896927 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:42.896972 kubelet[2947]: I0303 13:38:42.896946 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:42.897082 kubelet[2947]: I0303 13:38:42.896987 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e464b983c558b5bbd267fdc98d15db55-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-254\" (UID: \"e464b983c558b5bbd267fdc98d15db55\") " pod="kube-system/kube-scheduler-ip-172-31-31-254" Mar 3 13:38:42.897082 kubelet[2947]: I0303 13:38:42.897003 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d60386d0c0f710dbababcbdad3ede565-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-254\" (UID: \"d60386d0c0f710dbababcbdad3ede565\") " pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:42.897082 kubelet[2947]: I0303 
13:38:42.897017 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:42.897082 kubelet[2947]: I0303 13:38:42.897030 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:42.897420 kubelet[2947]: E0303 13:38:42.897394 2947 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-254?timeout=10s\": dial tcp 172.31.31.254:6443: connect: connection refused" interval="400ms" Mar 3 13:38:43.066569 kubelet[2947]: I0303 13:38:43.066539 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-254" Mar 3 13:38:43.066898 kubelet[2947]: E0303 13:38:43.066849 2947 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.31.254:6443/api/v1/nodes\": dial tcp 172.31.31.254:6443: connect: connection refused" node="ip-172-31-31-254" Mar 3 13:38:43.140139 containerd[1982]: time="2026-03-03T13:38:43.140093329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-254,Uid:e464b983c558b5bbd267fdc98d15db55,Namespace:kube-system,Attempt:0,}" Mar 3 13:38:43.158574 containerd[1982]: time="2026-03-03T13:38:43.158491553Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-254,Uid:d60386d0c0f710dbababcbdad3ede565,Namespace:kube-system,Attempt:0,}" Mar 3 13:38:43.165317 containerd[1982]: time="2026-03-03T13:38:43.165271364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-254,Uid:3f6cd686b72720a43dbc97d5dda819f5,Namespace:kube-system,Attempt:0,}" Mar 3 13:38:43.298401 kubelet[2947]: E0303 13:38:43.298283 2947 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-254?timeout=10s\": dial tcp 172.31.31.254:6443: connect: connection refused" interval="800ms" Mar 3 13:38:43.468712 kubelet[2947]: I0303 13:38:43.468582 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-254" Mar 3 13:38:43.469288 kubelet[2947]: E0303 13:38:43.468993 2947 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.31.254:6443/api/v1/nodes\": dial tcp 172.31.31.254:6443: connect: connection refused" node="ip-172-31-31-254" Mar 3 13:38:43.622763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457001411.mount: Deactivated successfully. 
Mar 3 13:38:43.637956 containerd[1982]: time="2026-03-03T13:38:43.637910001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:38:43.645834 containerd[1982]: time="2026-03-03T13:38:43.645787992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 3 13:38:43.647872 containerd[1982]: time="2026-03-03T13:38:43.647832186Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:38:43.649933 containerd[1982]: time="2026-03-03T13:38:43.649737835Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:38:43.659441 containerd[1982]: time="2026-03-03T13:38:43.659389157Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:38:43.660159 containerd[1982]: time="2026-03-03T13:38:43.660128338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 13:38:43.661922 containerd[1982]: time="2026-03-03T13:38:43.661866891Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 13:38:43.663850 containerd[1982]: time="2026-03-03T13:38:43.663809945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:38:43.664916 
containerd[1982]: time="2026-03-03T13:38:43.664380752Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 522.415419ms" Mar 3 13:38:43.668201 containerd[1982]: time="2026-03-03T13:38:43.668164066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 507.528735ms" Mar 3 13:38:43.684346 containerd[1982]: time="2026-03-03T13:38:43.684291990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 516.968264ms" Mar 3 13:38:43.702257 containerd[1982]: time="2026-03-03T13:38:43.702020550Z" level=info msg="connecting to shim de87b1d9fd82b84c4080122b5a7cdd9412cc0eaaf996b8d0e85e600593d343aa" address="unix:///run/containerd/s/9a5f46024fc9e77d1b4c838e3250a39a41211d9e3261601b6cf8fd2b4e0b19c7" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:38:43.710698 containerd[1982]: time="2026-03-03T13:38:43.710651616Z" level=info msg="connecting to shim ca29a9b9e8d5e922254c161c13dc7fe517c7c7f7281a4d3416de061ae700d8b5" address="unix:///run/containerd/s/28508d135c294dc2703d113ae360b5a736111873201fe0bc871401cbce6b0551" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:38:43.738642 containerd[1982]: time="2026-03-03T13:38:43.738532042Z" level=info msg="connecting to shim 
a2e28f77a4a4ed3379362cb1145e198f80b36630e7384c9c6279d1e1e6b89bf5" address="unix:///run/containerd/s/0f112d6c542cb42fa50de37a3e30e2921563d4642cf80091c4927c93847c5488" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:38:43.748859 systemd[1]: Started cri-containerd-de87b1d9fd82b84c4080122b5a7cdd9412cc0eaaf996b8d0e85e600593d343aa.scope - libcontainer container de87b1d9fd82b84c4080122b5a7cdd9412cc0eaaf996b8d0e85e600593d343aa. Mar 3 13:38:43.767330 systemd[1]: Started cri-containerd-ca29a9b9e8d5e922254c161c13dc7fe517c7c7f7281a4d3416de061ae700d8b5.scope - libcontainer container ca29a9b9e8d5e922254c161c13dc7fe517c7c7f7281a4d3416de061ae700d8b5. Mar 3 13:38:43.818301 systemd[1]: Started cri-containerd-a2e28f77a4a4ed3379362cb1145e198f80b36630e7384c9c6279d1e1e6b89bf5.scope - libcontainer container a2e28f77a4a4ed3379362cb1145e198f80b36630e7384c9c6279d1e1e6b89bf5. Mar 3 13:38:43.841016 containerd[1982]: time="2026-03-03T13:38:43.840980045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-254,Uid:e464b983c558b5bbd267fdc98d15db55,Namespace:kube-system,Attempt:0,} returns sandbox id \"de87b1d9fd82b84c4080122b5a7cdd9412cc0eaaf996b8d0e85e600593d343aa\"" Mar 3 13:38:43.855435 containerd[1982]: time="2026-03-03T13:38:43.855390963Z" level=info msg="CreateContainer within sandbox \"de87b1d9fd82b84c4080122b5a7cdd9412cc0eaaf996b8d0e85e600593d343aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 13:38:43.882542 containerd[1982]: time="2026-03-03T13:38:43.882506404Z" level=info msg="Container a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:38:43.884232 containerd[1982]: time="2026-03-03T13:38:43.883959600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-254,Uid:d60386d0c0f710dbababcbdad3ede565,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca29a9b9e8d5e922254c161c13dc7fe517c7c7f7281a4d3416de061ae700d8b5\"" 
Mar 3 13:38:43.895381 containerd[1982]: time="2026-03-03T13:38:43.895341193Z" level=info msg="CreateContainer within sandbox \"ca29a9b9e8d5e922254c161c13dc7fe517c7c7f7281a4d3416de061ae700d8b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 13:38:43.901278 containerd[1982]: time="2026-03-03T13:38:43.901123331Z" level=info msg="CreateContainer within sandbox \"de87b1d9fd82b84c4080122b5a7cdd9412cc0eaaf996b8d0e85e600593d343aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22\"" Mar 3 13:38:43.902035 containerd[1982]: time="2026-03-03T13:38:43.902009803Z" level=info msg="StartContainer for \"a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22\"" Mar 3 13:38:43.904504 containerd[1982]: time="2026-03-03T13:38:43.904444520Z" level=info msg="connecting to shim a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22" address="unix:///run/containerd/s/9a5f46024fc9e77d1b4c838e3250a39a41211d9e3261601b6cf8fd2b4e0b19c7" protocol=ttrpc version=3 Mar 3 13:38:43.908944 containerd[1982]: time="2026-03-03T13:38:43.908697809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-254,Uid:3f6cd686b72720a43dbc97d5dda819f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2e28f77a4a4ed3379362cb1145e198f80b36630e7384c9c6279d1e1e6b89bf5\"" Mar 3 13:38:43.915536 containerd[1982]: time="2026-03-03T13:38:43.915498558Z" level=info msg="CreateContainer within sandbox \"a2e28f77a4a4ed3379362cb1145e198f80b36630e7384c9c6279d1e1e6b89bf5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 13:38:43.921648 containerd[1982]: time="2026-03-03T13:38:43.920996169Z" level=info msg="Container 53f267ad4be4189a6068b3723484d486fd013efc9f4722e3cad354fcaee2a6ff: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:38:43.934310 systemd[1]: Started 
cri-containerd-a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22.scope - libcontainer container a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22. Mar 3 13:38:43.936531 containerd[1982]: time="2026-03-03T13:38:43.936501778Z" level=info msg="Container 4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:38:43.941353 containerd[1982]: time="2026-03-03T13:38:43.941323700Z" level=info msg="CreateContainer within sandbox \"ca29a9b9e8d5e922254c161c13dc7fe517c7c7f7281a4d3416de061ae700d8b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"53f267ad4be4189a6068b3723484d486fd013efc9f4722e3cad354fcaee2a6ff\"" Mar 3 13:38:43.942571 containerd[1982]: time="2026-03-03T13:38:43.942540783Z" level=info msg="StartContainer for \"53f267ad4be4189a6068b3723484d486fd013efc9f4722e3cad354fcaee2a6ff\"" Mar 3 13:38:43.944234 containerd[1982]: time="2026-03-03T13:38:43.944206510Z" level=info msg="connecting to shim 53f267ad4be4189a6068b3723484d486fd013efc9f4722e3cad354fcaee2a6ff" address="unix:///run/containerd/s/28508d135c294dc2703d113ae360b5a736111873201fe0bc871401cbce6b0551" protocol=ttrpc version=3 Mar 3 13:38:43.948527 containerd[1982]: time="2026-03-03T13:38:43.948490811Z" level=info msg="CreateContainer within sandbox \"a2e28f77a4a4ed3379362cb1145e198f80b36630e7384c9c6279d1e1e6b89bf5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec\"" Mar 3 13:38:43.951334 containerd[1982]: time="2026-03-03T13:38:43.951010374Z" level=info msg="StartContainer for \"4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec\"" Mar 3 13:38:43.952002 containerd[1982]: time="2026-03-03T13:38:43.951937311Z" level=info msg="connecting to shim 4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec" 
address="unix:///run/containerd/s/0f112d6c542cb42fa50de37a3e30e2921563d4642cf80091c4927c93847c5488" protocol=ttrpc version=3 Mar 3 13:38:43.980037 systemd[1]: Started cri-containerd-53f267ad4be4189a6068b3723484d486fd013efc9f4722e3cad354fcaee2a6ff.scope - libcontainer container 53f267ad4be4189a6068b3723484d486fd013efc9f4722e3cad354fcaee2a6ff. Mar 3 13:38:43.983813 systemd[1]: Started cri-containerd-4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec.scope - libcontainer container 4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec. Mar 3 13:38:44.021975 containerd[1982]: time="2026-03-03T13:38:44.021093579Z" level=info msg="StartContainer for \"a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22\" returns successfully" Mar 3 13:38:44.073665 containerd[1982]: time="2026-03-03T13:38:44.073599753Z" level=info msg="StartContainer for \"53f267ad4be4189a6068b3723484d486fd013efc9f4722e3cad354fcaee2a6ff\" returns successfully" Mar 3 13:38:44.098703 kubelet[2947]: E0303 13:38:44.098663 2947 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-254?timeout=10s\": dial tcp 172.31.31.254:6443: connect: connection refused" interval="1.6s" Mar 3 13:38:44.113018 containerd[1982]: time="2026-03-03T13:38:44.112980955Z" level=info msg="StartContainer for \"4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec\" returns successfully" Mar 3 13:38:44.271533 kubelet[2947]: I0303 13:38:44.271133 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-254" Mar 3 13:38:44.736917 kubelet[2947]: E0303 13:38:44.736413 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-254\" not found" node="ip-172-31-31-254" Mar 3 13:38:44.743649 kubelet[2947]: E0303 13:38:44.743609 2947 kubelet.go:3336] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"ip-172-31-31-254\" not found" node="ip-172-31-31-254" Mar 3 13:38:44.744561 kubelet[2947]: E0303 13:38:44.744381 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-254\" not found" node="ip-172-31-31-254" Mar 3 13:38:45.639419 kubelet[2947]: I0303 13:38:45.639337 2947 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-31-254" Mar 3 13:38:45.639419 kubelet[2947]: E0303 13:38:45.639388 2947 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-254\": node \"ip-172-31-31-254\" not found" Mar 3 13:38:45.654119 kubelet[2947]: E0303 13:38:45.653209 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:45.745488 kubelet[2947]: E0303 13:38:45.745457 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-254\" not found" node="ip-172-31-31-254" Mar 3 13:38:45.746048 kubelet[2947]: E0303 13:38:45.746025 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-254\" not found" node="ip-172-31-31-254" Mar 3 13:38:45.755285 kubelet[2947]: E0303 13:38:45.755249 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:45.856148 kubelet[2947]: E0303 13:38:45.856097 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:45.956937 kubelet[2947]: E0303 13:38:45.956872 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.058011 kubelet[2947]: E0303 13:38:46.057939 2947 kubelet_node_status.go:392] "Error getting the current node 
from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.158921 kubelet[2947]: E0303 13:38:46.158864 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.259141 kubelet[2947]: E0303 13:38:46.259028 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.359985 kubelet[2947]: E0303 13:38:46.359941 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.460965 kubelet[2947]: E0303 13:38:46.460871 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.561405 kubelet[2947]: E0303 13:38:46.561299 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.662363 kubelet[2947]: E0303 13:38:46.662323 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.763058 kubelet[2947]: E0303 13:38:46.763020 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-254\" not found" Mar 3 13:38:46.794604 kubelet[2947]: I0303 13:38:46.794563 2947 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:46.807777 kubelet[2947]: I0303 13:38:46.807505 2947 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-254" Mar 3 13:38:46.813973 kubelet[2947]: I0303 13:38:46.813827 2947 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:47.667181 kubelet[2947]: I0303 13:38:47.667121 2947 apiserver.go:52] "Watching apiserver" Mar 3 13:38:47.696123 kubelet[2947]: I0303 13:38:47.696065 2947 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 13:38:47.805992 systemd[1]: Reload requested from client PID 3234 ('systemctl') (unit session-7.scope)... Mar 3 13:38:47.806012 systemd[1]: Reloading... Mar 3 13:38:47.930951 zram_generator::config[3285]: No configuration found. Mar 3 13:38:48.171969 systemd[1]: Reloading finished in 365 ms. Mar 3 13:38:48.204911 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:38:48.215268 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 13:38:48.215495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:38:48.215548 systemd[1]: kubelet.service: Consumed 691ms CPU time, 122.2M memory peak. Mar 3 13:38:48.220016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:38:48.487560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:38:48.500002 (kubelet)[3339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 13:38:48.557903 kubelet[3339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 13:38:48.566695 kubelet[3339]: I0303 13:38:48.566634 3339 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 3 13:38:48.566695 kubelet[3339]: I0303 13:38:48.566677 3339 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 13:38:48.566695 kubelet[3339]: I0303 13:38:48.566698 3339 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 13:38:48.566695 kubelet[3339]: I0303 13:38:48.566704 3339 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 3 13:38:48.567083 kubelet[3339]: I0303 13:38:48.567059 3339 server.go:951] "Client rotation is on, will bootstrap in background" Mar 3 13:38:48.568346 kubelet[3339]: I0303 13:38:48.568313 3339 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 3 13:38:48.576239 kubelet[3339]: I0303 13:38:48.576071 3339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 13:38:48.581324 kubelet[3339]: I0303 13:38:48.581295 3339 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 13:38:48.591903 kubelet[3339]: I0303 13:38:48.590305 3339 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 3 13:38:48.591903 kubelet[3339]: I0303 13:38:48.590607 3339 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 13:38:48.591903 kubelet[3339]: I0303 13:38:48.590642 3339 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-31-254","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 13:38:48.591903 kubelet[3339]: I0303 13:38:48.590867 3339 topology_manager.go:143] "Creating topology manager with none policy" Mar 3 13:38:48.592257 kubelet[3339]: I0303 13:38:48.590903 3339 container_manager_linux.go:308] "Creating device plugin manager" Mar 3 13:38:48.592257 kubelet[3339]: I0303 13:38:48.590931 3339 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 13:38:48.592257 kubelet[3339]: I0303 13:38:48.591146 3339 state_mem.go:41] 
"Initialized" logger="CPUManager state memory" Mar 3 13:38:48.592257 kubelet[3339]: I0303 13:38:48.591318 3339 kubelet.go:482] "Attempting to sync node with API server" Mar 3 13:38:48.592257 kubelet[3339]: I0303 13:38:48.591336 3339 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 13:38:48.592257 kubelet[3339]: I0303 13:38:48.591352 3339 kubelet.go:394] "Adding apiserver pod source" Mar 3 13:38:48.592257 kubelet[3339]: I0303 13:38:48.591364 3339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 13:38:48.595183 kubelet[3339]: I0303 13:38:48.595036 3339 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 13:38:48.599101 kubelet[3339]: I0303 13:38:48.599001 3339 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 13:38:48.599615 kubelet[3339]: I0303 13:38:48.599498 3339 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 13:38:48.607823 kubelet[3339]: I0303 13:38:48.607734 3339 server.go:1257] "Started kubelet" Mar 3 13:38:48.610918 kubelet[3339]: I0303 13:38:48.609450 3339 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 13:38:48.610918 kubelet[3339]: I0303 13:38:48.609543 3339 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 13:38:48.610918 kubelet[3339]: I0303 13:38:48.609838 3339 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 13:38:48.610918 kubelet[3339]: I0303 13:38:48.609918 3339 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 13:38:48.612626 kubelet[3339]: I0303 13:38:48.611710 3339 server.go:317] "Adding debug handlers to kubelet 
server" Mar 3 13:38:48.614546 kubelet[3339]: I0303 13:38:48.614524 3339 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 3 13:38:48.619124 kubelet[3339]: I0303 13:38:48.618365 3339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 13:38:48.622708 kubelet[3339]: I0303 13:38:48.622685 3339 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 3 13:38:48.622984 kubelet[3339]: I0303 13:38:48.622969 3339 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 13:38:48.623216 kubelet[3339]: I0303 13:38:48.623206 3339 reconciler.go:29] "Reconciler: start to sync state" Mar 3 13:38:48.625764 kubelet[3339]: I0303 13:38:48.624722 3339 factory.go:223] Registration of the systemd container factory successfully Mar 3 13:38:48.626129 kubelet[3339]: I0303 13:38:48.626098 3339 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 13:38:48.628066 kubelet[3339]: I0303 13:38:48.628004 3339 factory.go:223] Registration of the containerd container factory successfully Mar 3 13:38:48.644420 kubelet[3339]: I0303 13:38:48.644378 3339 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 3 13:38:48.645907 kubelet[3339]: I0303 13:38:48.645840 3339 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 3 13:38:48.645907 kubelet[3339]: I0303 13:38:48.645868 3339 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 3 13:38:48.646067 kubelet[3339]: I0303 13:38:48.645922 3339 kubelet.go:2501] "Starting kubelet main sync loop" Mar 3 13:38:48.646067 kubelet[3339]: E0303 13:38:48.645992 3339 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 13:38:48.695099 kubelet[3339]: I0303 13:38:48.694973 3339 cpu_manager.go:225] "Starting" policy="none" Mar 3 13:38:48.695099 kubelet[3339]: I0303 13:38:48.694991 3339 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 3 13:38:48.695099 kubelet[3339]: I0303 13:38:48.695028 3339 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 3 13:38:48.695500 kubelet[3339]: I0303 13:38:48.695180 3339 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 3 13:38:48.695500 kubelet[3339]: I0303 13:38:48.695194 3339 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 3 13:38:48.695500 kubelet[3339]: I0303 13:38:48.695215 3339 policy_none.go:50] "Start" Mar 3 13:38:48.695500 kubelet[3339]: I0303 13:38:48.695226 3339 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 13:38:48.695500 kubelet[3339]: I0303 13:38:48.695238 3339 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 13:38:48.695500 kubelet[3339]: I0303 13:38:48.695407 3339 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 3 13:38:48.695500 kubelet[3339]: I0303 13:38:48.695419 3339 policy_none.go:44] "Start" Mar 3 13:38:48.702447 kubelet[3339]: E0303 13:38:48.702310 3339 manager.go:525] "Failed to read data from checkpoint" err="checkpoint 
is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 13:38:48.702971 kubelet[3339]: I0303 13:38:48.702733 3339 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 3 13:38:48.702971 kubelet[3339]: I0303 13:38:48.702901 3339 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 13:38:48.703966 kubelet[3339]: I0303 13:38:48.703863 3339 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 3 13:38:48.707645 kubelet[3339]: E0303 13:38:48.707606 3339 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 13:38:48.749936 kubelet[3339]: I0303 13:38:48.749259 3339 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-254" Mar 3 13:38:48.750240 kubelet[3339]: I0303 13:38:48.749256 3339 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:48.751864 kubelet[3339]: I0303 13:38:48.749365 3339 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:48.759106 kubelet[3339]: E0303 13:38:48.759068 3339 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-254\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:48.759990 kubelet[3339]: E0303 13:38:48.759933 3339 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-254\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:48.760419 kubelet[3339]: E0303 13:38:48.760357 3339 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-254\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-254" Mar 3 13:38:48.814664 kubelet[3339]: I0303 13:38:48.814637 3339 kubelet_node_status.go:74] 
"Attempting to register node" node="ip-172-31-31-254" Mar 3 13:38:48.824319 kubelet[3339]: I0303 13:38:48.824281 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d60386d0c0f710dbababcbdad3ede565-ca-certs\") pod \"kube-apiserver-ip-172-31-31-254\" (UID: \"d60386d0c0f710dbababcbdad3ede565\") " pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:48.824319 kubelet[3339]: I0303 13:38:48.824317 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d60386d0c0f710dbababcbdad3ede565-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-254\" (UID: \"d60386d0c0f710dbababcbdad3ede565\") " pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:48.824520 kubelet[3339]: I0303 13:38:48.824334 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d60386d0c0f710dbababcbdad3ede565-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-254\" (UID: \"d60386d0c0f710dbababcbdad3ede565\") " pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:48.824520 kubelet[3339]: I0303 13:38:48.824351 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:48.824520 kubelet[3339]: I0303 13:38:48.824372 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: 
\"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:48.824520 kubelet[3339]: I0303 13:38:48.824387 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:48.824520 kubelet[3339]: I0303 13:38:48.824401 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e464b983c558b5bbd267fdc98d15db55-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-254\" (UID: \"e464b983c558b5bbd267fdc98d15db55\") " pod="kube-system/kube-scheduler-ip-172-31-31-254" Mar 3 13:38:48.824644 kubelet[3339]: I0303 13:38:48.824414 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:48.824644 kubelet[3339]: I0303 13:38:48.824428 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f6cd686b72720a43dbc97d5dda819f5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-254\" (UID: \"3f6cd686b72720a43dbc97d5dda819f5\") " pod="kube-system/kube-controller-manager-ip-172-31-31-254" Mar 3 13:38:48.825125 kubelet[3339]: I0303 13:38:48.825064 3339 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-31-254" Mar 3 13:38:48.825125 kubelet[3339]: I0303 13:38:48.825131 3339 
kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-31-254" Mar 3 13:38:49.593458 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 3 13:38:49.595095 kubelet[3339]: I0303 13:38:49.593801 3339 apiserver.go:52] "Watching apiserver" Mar 3 13:38:49.625490 kubelet[3339]: I0303 13:38:49.625451 3339 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 13:38:49.681854 kubelet[3339]: I0303 13:38:49.681560 3339 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:49.683909 kubelet[3339]: I0303 13:38:49.683817 3339 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-254" Mar 3 13:38:49.697830 kubelet[3339]: E0303 13:38:49.697797 3339 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-254\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-254" Mar 3 13:38:49.706391 kubelet[3339]: E0303 13:38:49.706349 3339 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-254\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-254" Mar 3 13:38:49.735179 kubelet[3339]: I0303 13:38:49.735094 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-254" podStartSLOduration=3.735077009 podStartE2EDuration="3.735077009s" podCreationTimestamp="2026-03-03 13:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:38:49.731543033 +0000 UTC m=+1.225751982" watchObservedRunningTime="2026-03-03 13:38:49.735077009 +0000 UTC m=+1.229285960" Mar 3 13:38:49.735398 kubelet[3339]: I0303 13:38:49.735226 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-254" podStartSLOduration=3.73522088 
podStartE2EDuration="3.73522088s" podCreationTimestamp="2026-03-03 13:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:38:49.714634102 +0000 UTC m=+1.208843052" watchObservedRunningTime="2026-03-03 13:38:49.73522088 +0000 UTC m=+1.229429829" Mar 3 13:38:49.772866 kubelet[3339]: I0303 13:38:49.772802 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-254" podStartSLOduration=3.772785151 podStartE2EDuration="3.772785151s" podCreationTimestamp="2026-03-03 13:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:38:49.753531752 +0000 UTC m=+1.247740704" watchObservedRunningTime="2026-03-03 13:38:49.772785151 +0000 UTC m=+1.266994102" Mar 3 13:38:54.706120 kubelet[3339]: I0303 13:38:54.706087 3339 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 3 13:38:54.707698 containerd[1982]: time="2026-03-03T13:38:54.707661900Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 3 13:38:54.710398 kubelet[3339]: I0303 13:38:54.710374 3339 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 3 13:38:55.819799 systemd[1]: Created slice kubepods-besteffort-pod7509ea9f_1dda_48d1_9200_4ad820e59119.slice - libcontainer container kubepods-besteffort-pod7509ea9f_1dda_48d1_9200_4ad820e59119.slice. 
Mar 3 13:38:55.870090 kubelet[3339]: I0303 13:38:55.870045 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg2ph\" (UniqueName: \"kubernetes.io/projected/7509ea9f-1dda-48d1-9200-4ad820e59119-kube-api-access-pg2ph\") pod \"kube-proxy-qnqdb\" (UID: \"7509ea9f-1dda-48d1-9200-4ad820e59119\") " pod="kube-system/kube-proxy-qnqdb" Mar 3 13:38:55.870090 kubelet[3339]: I0303 13:38:55.870089 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7509ea9f-1dda-48d1-9200-4ad820e59119-xtables-lock\") pod \"kube-proxy-qnqdb\" (UID: \"7509ea9f-1dda-48d1-9200-4ad820e59119\") " pod="kube-system/kube-proxy-qnqdb" Mar 3 13:38:55.870090 kubelet[3339]: I0303 13:38:55.870105 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7509ea9f-1dda-48d1-9200-4ad820e59119-kube-proxy\") pod \"kube-proxy-qnqdb\" (UID: \"7509ea9f-1dda-48d1-9200-4ad820e59119\") " pod="kube-system/kube-proxy-qnqdb" Mar 3 13:38:55.870532 kubelet[3339]: I0303 13:38:55.870125 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7509ea9f-1dda-48d1-9200-4ad820e59119-lib-modules\") pod \"kube-proxy-qnqdb\" (UID: \"7509ea9f-1dda-48d1-9200-4ad820e59119\") " pod="kube-system/kube-proxy-qnqdb" Mar 3 13:38:55.958527 systemd[1]: Created slice kubepods-besteffort-podbd96d374_697c_4c87_8cea_03d2a2bc3ea2.slice - libcontainer container kubepods-besteffort-podbd96d374_697c_4c87_8cea_03d2a2bc3ea2.slice. 
Mar 3 13:38:55.970577 kubelet[3339]: I0303 13:38:55.970542 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r2hc\" (UniqueName: \"kubernetes.io/projected/bd96d374-697c-4c87-8cea-03d2a2bc3ea2-kube-api-access-8r2hc\") pod \"tigera-operator-6cf4cccc57-cz645\" (UID: \"bd96d374-697c-4c87-8cea-03d2a2bc3ea2\") " pod="tigera-operator/tigera-operator-6cf4cccc57-cz645" Mar 3 13:38:55.970925 kubelet[3339]: I0303 13:38:55.970902 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd96d374-697c-4c87-8cea-03d2a2bc3ea2-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-cz645\" (UID: \"bd96d374-697c-4c87-8cea-03d2a2bc3ea2\") " pod="tigera-operator/tigera-operator-6cf4cccc57-cz645" Mar 3 13:38:56.134152 containerd[1982]: time="2026-03-03T13:38:56.133734174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnqdb,Uid:7509ea9f-1dda-48d1-9200-4ad820e59119,Namespace:kube-system,Attempt:0,}" Mar 3 13:38:56.186742 containerd[1982]: time="2026-03-03T13:38:56.186696495Z" level=info msg="connecting to shim 1a386ed1eca7f13288f0e046513d8ea1241041d90680d435038b61edef7aa486" address="unix:///run/containerd/s/5d1b29ca08ce59c1c6d6a6543ba5070fee427b4f7578f04a087e390bacbfa814" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:38:56.218110 systemd[1]: Started cri-containerd-1a386ed1eca7f13288f0e046513d8ea1241041d90680d435038b61edef7aa486.scope - libcontainer container 1a386ed1eca7f13288f0e046513d8ea1241041d90680d435038b61edef7aa486. 
Mar 3 13:38:56.245585 containerd[1982]: time="2026-03-03T13:38:56.245541335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnqdb,Uid:7509ea9f-1dda-48d1-9200-4ad820e59119,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a386ed1eca7f13288f0e046513d8ea1241041d90680d435038b61edef7aa486\"" Mar 3 13:38:56.253825 containerd[1982]: time="2026-03-03T13:38:56.253409745Z" level=info msg="CreateContainer within sandbox \"1a386ed1eca7f13288f0e046513d8ea1241041d90680d435038b61edef7aa486\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 3 13:38:56.266298 containerd[1982]: time="2026-03-03T13:38:56.266256662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-cz645,Uid:bd96d374-697c-4c87-8cea-03d2a2bc3ea2,Namespace:tigera-operator,Attempt:0,}" Mar 3 13:38:56.272125 containerd[1982]: time="2026-03-03T13:38:56.272087715Z" level=info msg="Container f635749967a0dce3ec1d991d29e175e5f3d2cedbcb7e0b3c66978bae1c53bb55: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:38:56.290173 containerd[1982]: time="2026-03-03T13:38:56.290128609Z" level=info msg="CreateContainer within sandbox \"1a386ed1eca7f13288f0e046513d8ea1241041d90680d435038b61edef7aa486\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f635749967a0dce3ec1d991d29e175e5f3d2cedbcb7e0b3c66978bae1c53bb55\"" Mar 3 13:38:56.290746 containerd[1982]: time="2026-03-03T13:38:56.290716143Z" level=info msg="StartContainer for \"f635749967a0dce3ec1d991d29e175e5f3d2cedbcb7e0b3c66978bae1c53bb55\"" Mar 3 13:38:56.292351 containerd[1982]: time="2026-03-03T13:38:56.292297614Z" level=info msg="connecting to shim f635749967a0dce3ec1d991d29e175e5f3d2cedbcb7e0b3c66978bae1c53bb55" address="unix:///run/containerd/s/5d1b29ca08ce59c1c6d6a6543ba5070fee427b4f7578f04a087e390bacbfa814" protocol=ttrpc version=3 Mar 3 13:38:56.309430 containerd[1982]: time="2026-03-03T13:38:56.308994739Z" level=info msg="connecting to shim 
77d824227dc6d44acfdb349dd363d4ccdd5c0b2238c763a95cf5afad5c8fe955" address="unix:///run/containerd/s/42e4129227ff0a64a573d6575b6c9e6b43326d0e28e80595e08aca9873bfd1ae" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:38:56.313152 systemd[1]: Started cri-containerd-f635749967a0dce3ec1d991d29e175e5f3d2cedbcb7e0b3c66978bae1c53bb55.scope - libcontainer container f635749967a0dce3ec1d991d29e175e5f3d2cedbcb7e0b3c66978bae1c53bb55. Mar 3 13:38:56.345289 systemd[1]: Started cri-containerd-77d824227dc6d44acfdb349dd363d4ccdd5c0b2238c763a95cf5afad5c8fe955.scope - libcontainer container 77d824227dc6d44acfdb349dd363d4ccdd5c0b2238c763a95cf5afad5c8fe955. Mar 3 13:38:56.399025 containerd[1982]: time="2026-03-03T13:38:56.398988030Z" level=info msg="StartContainer for \"f635749967a0dce3ec1d991d29e175e5f3d2cedbcb7e0b3c66978bae1c53bb55\" returns successfully" Mar 3 13:38:56.414439 containerd[1982]: time="2026-03-03T13:38:56.414391960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-cz645,Uid:bd96d374-697c-4c87-8cea-03d2a2bc3ea2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"77d824227dc6d44acfdb349dd363d4ccdd5c0b2238c763a95cf5afad5c8fe955\"" Mar 3 13:38:56.418219 containerd[1982]: time="2026-03-03T13:38:56.418184207Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 3 13:38:56.709200 kubelet[3339]: I0303 13:38:56.708704 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-qnqdb" podStartSLOduration=1.708657647 podStartE2EDuration="1.708657647s" podCreationTimestamp="2026-03-03 13:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:38:56.708485443 +0000 UTC m=+8.202694388" watchObservedRunningTime="2026-03-03 13:38:56.708657647 +0000 UTC m=+8.202866597" Mar 3 13:38:57.728550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810498227.mount: Deactivated 
successfully. Mar 3 13:39:01.623600 containerd[1982]: time="2026-03-03T13:39:01.623535019Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:01.631549 containerd[1982]: time="2026-03-03T13:39:01.631487043Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 3 13:39:01.639937 containerd[1982]: time="2026-03-03T13:39:01.639021724Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:01.668730 containerd[1982]: time="2026-03-03T13:39:01.668680282Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:01.671156 containerd[1982]: time="2026-03-03T13:39:01.669578812Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.25135491s" Mar 3 13:39:01.671156 containerd[1982]: time="2026-03-03T13:39:01.669623268Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 3 13:39:01.775528 containerd[1982]: time="2026-03-03T13:39:01.775429820Z" level=info msg="CreateContainer within sandbox \"77d824227dc6d44acfdb349dd363d4ccdd5c0b2238c763a95cf5afad5c8fe955\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 3 13:39:01.847574 containerd[1982]: time="2026-03-03T13:39:01.847522164Z" level=info msg="Container 
63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:01.918102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494134689.mount: Deactivated successfully. Mar 3 13:39:01.955138 containerd[1982]: time="2026-03-03T13:39:01.949735723Z" level=info msg="CreateContainer within sandbox \"77d824227dc6d44acfdb349dd363d4ccdd5c0b2238c763a95cf5afad5c8fe955\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049\"" Mar 3 13:39:01.977805 containerd[1982]: time="2026-03-03T13:39:01.957229066Z" level=info msg="StartContainer for \"63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049\"" Mar 3 13:39:01.990605 containerd[1982]: time="2026-03-03T13:39:01.990531093Z" level=info msg="connecting to shim 63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049" address="unix:///run/containerd/s/42e4129227ff0a64a573d6575b6c9e6b43326d0e28e80595e08aca9873bfd1ae" protocol=ttrpc version=3 Mar 3 13:39:02.114301 systemd[1]: Started cri-containerd-63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049.scope - libcontainer container 63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049. Mar 3 13:39:02.183051 containerd[1982]: time="2026-03-03T13:39:02.182877140Z" level=info msg="StartContainer for \"63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049\" returns successfully" Mar 3 13:39:04.401653 update_engine[1958]: I20260303 13:39:04.400934 1958 update_attempter.cc:509] Updating boot flags... 
Mar 3 13:39:06.164905 kubelet[3339]: I0303 13:39:06.161422 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-cz645" podStartSLOduration=5.885333276 podStartE2EDuration="11.16137861s" podCreationTimestamp="2026-03-03 13:38:55 +0000 UTC" firstStartedPulling="2026-03-03 13:38:56.417769084 +0000 UTC m=+7.911978010" lastFinishedPulling="2026-03-03 13:39:01.693814418 +0000 UTC m=+13.188023344" observedRunningTime="2026-03-03 13:39:02.769847324 +0000 UTC m=+14.264056272" watchObservedRunningTime="2026-03-03 13:39:06.16137861 +0000 UTC m=+17.655587559" Mar 3 13:39:09.962575 sudo[2370]: pam_unix(sudo:session): session closed for user root Mar 3 13:39:10.046648 sshd[2369]: Connection closed by 68.220.241.50 port 37526 Mar 3 13:39:10.046548 sshd-session[2366]: pam_unix(sshd:session): session closed for user core Mar 3 13:39:10.052362 systemd[1]: sshd@6-172.31.31.254:22-68.220.241.50:37526.service: Deactivated successfully. Mar 3 13:39:10.055920 systemd[1]: session-7.scope: Deactivated successfully. Mar 3 13:39:10.056381 systemd[1]: session-7.scope: Consumed 3.761s CPU time, 169M memory peak. Mar 3 13:39:10.058240 systemd-logind[1956]: Session 7 logged out. Waiting for processes to exit. Mar 3 13:39:10.060592 systemd-logind[1956]: Removed session 7. Mar 3 13:39:12.700021 systemd[1]: Created slice kubepods-besteffort-podf5eb25b0_9581_4e76_8524_bdf6aa6b05a6.slice - libcontainer container kubepods-besteffort-podf5eb25b0_9581_4e76_8524_bdf6aa6b05a6.slice. 
Mar 3 13:39:12.715899 kubelet[3339]: I0303 13:39:12.715701 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5eb25b0-9581-4e76-8524-bdf6aa6b05a6-tigera-ca-bundle\") pod \"calico-typha-58dd4f9b54-dv55v\" (UID: \"f5eb25b0-9581-4e76-8524-bdf6aa6b05a6\") " pod="calico-system/calico-typha-58dd4f9b54-dv55v" Mar 3 13:39:12.715899 kubelet[3339]: I0303 13:39:12.715746 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f5eb25b0-9581-4e76-8524-bdf6aa6b05a6-typha-certs\") pod \"calico-typha-58dd4f9b54-dv55v\" (UID: \"f5eb25b0-9581-4e76-8524-bdf6aa6b05a6\") " pod="calico-system/calico-typha-58dd4f9b54-dv55v" Mar 3 13:39:12.715899 kubelet[3339]: I0303 13:39:12.715780 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnbr\" (UniqueName: \"kubernetes.io/projected/f5eb25b0-9581-4e76-8524-bdf6aa6b05a6-kube-api-access-rdnbr\") pod \"calico-typha-58dd4f9b54-dv55v\" (UID: \"f5eb25b0-9581-4e76-8524-bdf6aa6b05a6\") " pod="calico-system/calico-typha-58dd4f9b54-dv55v" Mar 3 13:39:12.793874 systemd[1]: Created slice kubepods-besteffort-pod90fb4841_b9ab_40a5_b844_21d8effe2dea.slice - libcontainer container kubepods-besteffort-pod90fb4841_b9ab_40a5_b844_21d8effe2dea.slice. 
Mar 3 13:39:12.816750 kubelet[3339]: I0303 13:39:12.816715 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-cni-net-dir\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.816750 kubelet[3339]: I0303 13:39:12.816748 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-sys-fs\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.816969 kubelet[3339]: I0303 13:39:12.816764 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-policysync\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.816969 kubelet[3339]: I0303 13:39:12.816801 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-bpffs\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.816969 kubelet[3339]: I0303 13:39:12.816817 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-cni-log-dir\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.816969 kubelet[3339]: I0303 13:39:12.816830 3339 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-lib-modules\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.816969 kubelet[3339]: I0303 13:39:12.816843 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-nodeproc\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.817173 kubelet[3339]: I0303 13:39:12.816858 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90fb4841-b9ab-40a5-b844-21d8effe2dea-tigera-ca-bundle\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.817173 kubelet[3339]: I0303 13:39:12.816872 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnzqp\" (UniqueName: \"kubernetes.io/projected/90fb4841-b9ab-40a5-b844-21d8effe2dea-kube-api-access-rnzqp\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.817173 kubelet[3339]: I0303 13:39:12.816900 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-flexvol-driver-host\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.817173 kubelet[3339]: I0303 13:39:12.816928 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/90fb4841-b9ab-40a5-b844-21d8effe2dea-node-certs\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.817173 kubelet[3339]: I0303 13:39:12.816949 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-var-run-calico\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.817423 kubelet[3339]: I0303 13:39:12.816962 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-xtables-lock\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.817423 kubelet[3339]: I0303 13:39:12.816985 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-cni-bin-dir\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.817423 kubelet[3339]: I0303 13:39:12.817000 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/90fb4841-b9ab-40a5-b844-21d8effe2dea-var-lib-calico\") pod \"calico-node-pbcnq\" (UID: \"90fb4841-b9ab-40a5-b844-21d8effe2dea\") " pod="calico-system/calico-node-pbcnq" Mar 3 13:39:12.916148 kubelet[3339]: E0303 13:39:12.916095 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:12.933289 kubelet[3339]: E0303 13:39:12.933088 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:12.933289 kubelet[3339]: W0303 13:39:12.933122 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:12.933289 kubelet[3339]: E0303 13:39:12.933163 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:12.937307 kubelet[3339]: E0303 13:39:12.937229 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:12.937307 kubelet[3339]: W0303 13:39:12.937256 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:12.937307 kubelet[3339]: E0303 13:39:12.937292 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Mar 3 13:39:12.997775 kubelet[3339]: E0303 13:39:12.997704 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:12.997775 kubelet[3339]: W0303 13:39:12.997717 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:12.997775 kubelet[3339]: E0303 13:39:12.997729 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:12.998200 kubelet[3339]: E0303 13:39:12.998122 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:12.998200 kubelet[3339]: W0303 13:39:12.998134 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:12.998200 kubelet[3339]: E0303 13:39:12.998147 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:12.998575 kubelet[3339]: E0303 13:39:12.998508 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:12.998575 kubelet[3339]: W0303 13:39:12.998520 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:12.998575 kubelet[3339]: E0303 13:39:12.998532 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:12.998873 kubelet[3339]: E0303 13:39:12.998859 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:12.999056 kubelet[3339]: W0303 13:39:12.998932 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:12.999056 kubelet[3339]: E0303 13:39:12.998947 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:12.999440 kubelet[3339]: E0303 13:39:12.999298 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:12.999440 kubelet[3339]: W0303 13:39:12.999313 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:12.999440 kubelet[3339]: E0303 13:39:12.999326 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:12.999902 kubelet[3339]: E0303 13:39:12.999868 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.000069 kubelet[3339]: W0303 13:39:13.000000 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.000069 kubelet[3339]: E0303 13:39:13.000020 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.000493 kubelet[3339]: E0303 13:39:13.000437 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.000493 kubelet[3339]: W0303 13:39:13.000449 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.000493 kubelet[3339]: E0303 13:39:13.000463 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.001081 kubelet[3339]: E0303 13:39:13.001068 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.001293 kubelet[3339]: W0303 13:39:13.001220 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.001293 kubelet[3339]: E0303 13:39:13.001238 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.001733 kubelet[3339]: E0303 13:39:13.001635 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.001733 kubelet[3339]: W0303 13:39:13.001668 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.001733 kubelet[3339]: E0303 13:39:13.001681 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.002195 kubelet[3339]: E0303 13:39:13.002179 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.002366 kubelet[3339]: W0303 13:39:13.002265 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.002366 kubelet[3339]: E0303 13:39:13.002281 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.010674 containerd[1982]: time="2026-03-03T13:39:13.010628610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58dd4f9b54-dv55v,Uid:f5eb25b0-9581-4e76-8524-bdf6aa6b05a6,Namespace:calico-system,Attempt:0,}" Mar 3 13:39:13.022659 kubelet[3339]: E0303 13:39:13.022505 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.022659 kubelet[3339]: W0303 13:39:13.022533 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.022659 kubelet[3339]: E0303 13:39:13.022561 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.022659 kubelet[3339]: I0303 13:39:13.022601 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6298661-1013-4f89-bedc-335a967b2002-registration-dir\") pod \"csi-node-driver-rwlnv\" (UID: \"c6298661-1013-4f89-bedc-335a967b2002\") " pod="calico-system/csi-node-driver-rwlnv" Mar 3 13:39:13.022945 kubelet[3339]: E0303 13:39:13.022929 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.022945 kubelet[3339]: W0303 13:39:13.022943 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.023024 kubelet[3339]: E0303 13:39:13.022959 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.023024 kubelet[3339]: I0303 13:39:13.022997 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnp29\" (UniqueName: \"kubernetes.io/projected/c6298661-1013-4f89-bedc-335a967b2002-kube-api-access-rnp29\") pod \"csi-node-driver-rwlnv\" (UID: \"c6298661-1013-4f89-bedc-335a967b2002\") " pod="calico-system/csi-node-driver-rwlnv" Mar 3 13:39:13.023752 kubelet[3339]: E0303 13:39:13.023635 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.023752 kubelet[3339]: W0303 13:39:13.023653 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.023752 kubelet[3339]: E0303 13:39:13.023669 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.023752 kubelet[3339]: I0303 13:39:13.023690 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6298661-1013-4f89-bedc-335a967b2002-socket-dir\") pod \"csi-node-driver-rwlnv\" (UID: \"c6298661-1013-4f89-bedc-335a967b2002\") " pod="calico-system/csi-node-driver-rwlnv" Mar 3 13:39:13.024338 kubelet[3339]: E0303 13:39:13.024320 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.024427 kubelet[3339]: W0303 13:39:13.024338 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.024427 kubelet[3339]: E0303 13:39:13.024352 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.024581 kubelet[3339]: I0303 13:39:13.024473 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c6298661-1013-4f89-bedc-335a967b2002-varrun\") pod \"csi-node-driver-rwlnv\" (UID: \"c6298661-1013-4f89-bedc-335a967b2002\") " pod="calico-system/csi-node-driver-rwlnv" Mar 3 13:39:13.025499 kubelet[3339]: E0303 13:39:13.025467 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.025499 kubelet[3339]: W0303 13:39:13.025498 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.025729 kubelet[3339]: E0303 13:39:13.025512 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.026942 kubelet[3339]: I0303 13:39:13.026918 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6298661-1013-4f89-bedc-335a967b2002-kubelet-dir\") pod \"csi-node-driver-rwlnv\" (UID: \"c6298661-1013-4f89-bedc-335a967b2002\") " pod="calico-system/csi-node-driver-rwlnv" Mar 3 13:39:13.027578 kubelet[3339]: E0303 13:39:13.027559 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.027666 kubelet[3339]: W0303 13:39:13.027577 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.027666 kubelet[3339]: E0303 13:39:13.027594 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.028756 kubelet[3339]: E0303 13:39:13.028739 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.028756 kubelet[3339]: W0303 13:39:13.028756 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.029623 kubelet[3339]: E0303 13:39:13.028773 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.029623 kubelet[3339]: E0303 13:39:13.029066 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.029623 kubelet[3339]: W0303 13:39:13.029077 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.029623 kubelet[3339]: E0303 13:39:13.029091 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.029623 kubelet[3339]: E0303 13:39:13.029351 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.029623 kubelet[3339]: W0303 13:39:13.029360 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.029623 kubelet[3339]: E0303 13:39:13.029372 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.030129 kubelet[3339]: E0303 13:39:13.029783 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.030129 kubelet[3339]: W0303 13:39:13.029794 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.030129 kubelet[3339]: E0303 13:39:13.029807 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.030983 kubelet[3339]: E0303 13:39:13.030431 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.030983 kubelet[3339]: W0303 13:39:13.030443 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.030983 kubelet[3339]: E0303 13:39:13.030456 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.031653 kubelet[3339]: E0303 13:39:13.031631 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.031653 kubelet[3339]: W0303 13:39:13.031653 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.031782 kubelet[3339]: E0303 13:39:13.031669 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.032150 kubelet[3339]: E0303 13:39:13.032134 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.032150 kubelet[3339]: W0303 13:39:13.032148 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.032315 kubelet[3339]: E0303 13:39:13.032162 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.033040 kubelet[3339]: E0303 13:39:13.033004 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.033040 kubelet[3339]: W0303 13:39:13.033018 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.033040 kubelet[3339]: E0303 13:39:13.033032 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.033793 kubelet[3339]: E0303 13:39:13.033763 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.033793 kubelet[3339]: W0303 13:39:13.033777 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.033793 kubelet[3339]: E0303 13:39:13.033793 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.088373 containerd[1982]: time="2026-03-03T13:39:13.088063660Z" level=info msg="connecting to shim da57cd533341c4d3af77317a7d9755a3d2e24b1bbc961e81b949eec2ee16e75b" address="unix:///run/containerd/s/216027189b7cf82542d87fc3a4455a4d23f705b9c999baed131c2a9dcb511673" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:13.101475 containerd[1982]: time="2026-03-03T13:39:13.101433399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pbcnq,Uid:90fb4841-b9ab-40a5-b844-21d8effe2dea,Namespace:calico-system,Attempt:0,}" Mar 3 13:39:13.130482 kubelet[3339]: E0303 13:39:13.130286 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.130482 kubelet[3339]: W0303 13:39:13.130311 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.130482 kubelet[3339]: E0303 13:39:13.130334 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.131066 kubelet[3339]: E0303 13:39:13.131011 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.131066 kubelet[3339]: W0303 13:39:13.131028 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.131702 kubelet[3339]: E0303 13:39:13.131044 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.132124 kubelet[3339]: E0303 13:39:13.132048 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.132436 kubelet[3339]: W0303 13:39:13.132266 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.132436 kubelet[3339]: E0303 13:39:13.132285 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.133472 kubelet[3339]: E0303 13:39:13.133245 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.133637 kubelet[3339]: W0303 13:39:13.133573 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.133637 kubelet[3339]: E0303 13:39:13.133592 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.134673 kubelet[3339]: E0303 13:39:13.134636 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.134812 kubelet[3339]: W0303 13:39:13.134650 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.134812 kubelet[3339]: E0303 13:39:13.134762 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.136250 kubelet[3339]: E0303 13:39:13.136201 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.136250 kubelet[3339]: W0303 13:39:13.136216 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.136250 kubelet[3339]: E0303 13:39:13.136229 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.136722 kubelet[3339]: E0303 13:39:13.136680 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.136722 kubelet[3339]: W0303 13:39:13.136693 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.136722 kubelet[3339]: E0303 13:39:13.136707 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.137415 kubelet[3339]: E0303 13:39:13.137403 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.137850 kubelet[3339]: W0303 13:39:13.137810 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.137850 kubelet[3339]: E0303 13:39:13.137833 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.140036 kubelet[3339]: E0303 13:39:13.139056 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.140036 kubelet[3339]: W0303 13:39:13.139937 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.140036 kubelet[3339]: E0303 13:39:13.139957 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.142900 kubelet[3339]: E0303 13:39:13.142498 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.142900 kubelet[3339]: W0303 13:39:13.142516 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.142900 kubelet[3339]: E0303 13:39:13.142533 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.142900 kubelet[3339]: E0303 13:39:13.142784 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.142900 kubelet[3339]: W0303 13:39:13.142793 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.142900 kubelet[3339]: E0303 13:39:13.142804 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.144898 kubelet[3339]: E0303 13:39:13.143362 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.145268 kubelet[3339]: W0303 13:39:13.145000 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.145268 kubelet[3339]: E0303 13:39:13.145025 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.145463 kubelet[3339]: E0303 13:39:13.145424 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.145463 kubelet[3339]: W0303 13:39:13.145437 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.145463 kubelet[3339]: E0303 13:39:13.145449 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.147053 kubelet[3339]: E0303 13:39:13.147034 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.147270 kubelet[3339]: W0303 13:39:13.147235 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.147270 kubelet[3339]: E0303 13:39:13.147255 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.148920 kubelet[3339]: E0303 13:39:13.148061 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.148920 kubelet[3339]: W0303 13:39:13.148075 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.148920 kubelet[3339]: E0303 13:39:13.148093 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.149255 kubelet[3339]: E0303 13:39:13.149170 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.149255 kubelet[3339]: W0303 13:39:13.149184 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.149255 kubelet[3339]: E0303 13:39:13.149198 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.150160 kubelet[3339]: E0303 13:39:13.149976 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.150160 kubelet[3339]: W0303 13:39:13.149990 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.150160 kubelet[3339]: E0303 13:39:13.150004 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.150842 kubelet[3339]: E0303 13:39:13.150791 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.151075 kubelet[3339]: W0303 13:39:13.151029 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.151075 kubelet[3339]: E0303 13:39:13.151051 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.151710 kubelet[3339]: E0303 13:39:13.151696 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.151935 kubelet[3339]: W0303 13:39:13.151841 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.151935 kubelet[3339]: E0303 13:39:13.151859 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.153349 kubelet[3339]: E0303 13:39:13.153067 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.153349 kubelet[3339]: W0303 13:39:13.153080 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.153349 kubelet[3339]: E0303 13:39:13.153093 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.153349 kubelet[3339]: E0303 13:39:13.153317 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.153349 kubelet[3339]: W0303 13:39:13.153325 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.153349 kubelet[3339]: E0303 13:39:13.153335 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.154242 kubelet[3339]: E0303 13:39:13.154196 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.154242 kubelet[3339]: W0303 13:39:13.154212 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.154242 kubelet[3339]: E0303 13:39:13.154226 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.155384 kubelet[3339]: E0303 13:39:13.155209 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.155384 kubelet[3339]: W0303 13:39:13.155223 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.155384 kubelet[3339]: E0303 13:39:13.155236 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.155770 kubelet[3339]: E0303 13:39:13.155741 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.155912 kubelet[3339]: W0303 13:39:13.155869 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.156028 kubelet[3339]: E0303 13:39:13.156002 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.157487 kubelet[3339]: E0303 13:39:13.157194 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.157487 kubelet[3339]: W0303 13:39:13.157207 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.157487 kubelet[3339]: E0303 13:39:13.157221 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:39:13.177546 systemd[1]: Started cri-containerd-da57cd533341c4d3af77317a7d9755a3d2e24b1bbc961e81b949eec2ee16e75b.scope - libcontainer container da57cd533341c4d3af77317a7d9755a3d2e24b1bbc961e81b949eec2ee16e75b. Mar 3 13:39:13.189193 containerd[1982]: time="2026-03-03T13:39:13.189140997Z" level=info msg="connecting to shim e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050" address="unix:///run/containerd/s/ebc6f35e7f04cc2d46edac16d96185cf20d8175a7f2b99eca4a58e2b256d4d74" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:13.227650 kubelet[3339]: E0303 13:39:13.227377 3339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:39:13.227650 kubelet[3339]: W0303 13:39:13.227497 3339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:39:13.227650 kubelet[3339]: E0303 13:39:13.227521 3339 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:39:13.256668 systemd[1]: Started cri-containerd-e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050.scope - libcontainer container e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050. Mar 3 13:39:13.367416 containerd[1982]: time="2026-03-03T13:39:13.367366773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pbcnq,Uid:90fb4841-b9ab-40a5-b844-21d8effe2dea,Namespace:calico-system,Attempt:0,} returns sandbox id \"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\"" Mar 3 13:39:13.371236 containerd[1982]: time="2026-03-03T13:39:13.371203209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 3 13:39:13.413800 containerd[1982]: time="2026-03-03T13:39:13.413765975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58dd4f9b54-dv55v,Uid:f5eb25b0-9581-4e76-8524-bdf6aa6b05a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"da57cd533341c4d3af77317a7d9755a3d2e24b1bbc961e81b949eec2ee16e75b\"" Mar 3 13:39:14.647626 kubelet[3339]: E0303 13:39:14.646690 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:14.898742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020355345.mount: Deactivated successfully. 
Mar 3 13:39:15.000810 containerd[1982]: time="2026-03-03T13:39:15.000756455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:15.002641 containerd[1982]: time="2026-03-03T13:39:15.002409921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 3 13:39:15.004733 containerd[1982]: time="2026-03-03T13:39:15.004695547Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:15.008181 containerd[1982]: time="2026-03-03T13:39:15.008146248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:15.008917 containerd[1982]: time="2026-03-03T13:39:15.008893099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.6374316s" Mar 3 13:39:15.009332 containerd[1982]: time="2026-03-03T13:39:15.008996887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 3 13:39:15.011461 containerd[1982]: time="2026-03-03T13:39:15.011422225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 3 13:39:15.021231 containerd[1982]: time="2026-03-03T13:39:15.021183747Z" level=info msg="CreateContainer within sandbox 
\"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 3 13:39:15.047850 containerd[1982]: time="2026-03-03T13:39:15.042749183Z" level=info msg="Container d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:15.082869 containerd[1982]: time="2026-03-03T13:39:15.082822344Z" level=info msg="CreateContainer within sandbox \"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923\"" Mar 3 13:39:15.083502 containerd[1982]: time="2026-03-03T13:39:15.083481147Z" level=info msg="StartContainer for \"d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923\"" Mar 3 13:39:15.084901 containerd[1982]: time="2026-03-03T13:39:15.084849197Z" level=info msg="connecting to shim d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923" address="unix:///run/containerd/s/ebc6f35e7f04cc2d46edac16d96185cf20d8175a7f2b99eca4a58e2b256d4d74" protocol=ttrpc version=3 Mar 3 13:39:15.111107 systemd[1]: Started cri-containerd-d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923.scope - libcontainer container d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923. Mar 3 13:39:15.212157 containerd[1982]: time="2026-03-03T13:39:15.211927245Z" level=info msg="StartContainer for \"d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923\" returns successfully" Mar 3 13:39:15.217316 systemd[1]: cri-containerd-d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923.scope: Deactivated successfully. 
Mar 3 13:39:15.262765 containerd[1982]: time="2026-03-03T13:39:15.262700177Z" level=info msg="received container exit event container_id:\"d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923\" id:\"d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923\" pid:4228 exited_at:{seconds:1772545155 nanos:226359339}" Mar 3 13:39:15.294001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d99b21e2f085215d8e881b681e89509d9889b62c469f70ab409f5fbf2f68a923-rootfs.mount: Deactivated successfully. Mar 3 13:39:16.647257 kubelet[3339]: E0303 13:39:16.646273 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:17.893299 containerd[1982]: time="2026-03-03T13:39:17.893235547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:17.894291 containerd[1982]: time="2026-03-03T13:39:17.894152664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 3 13:39:17.895626 containerd[1982]: time="2026-03-03T13:39:17.895391361Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:17.898826 containerd[1982]: time="2026-03-03T13:39:17.898224388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:17.898826 containerd[1982]: time="2026-03-03T13:39:17.898723463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" 
with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.886691182s" Mar 3 13:39:17.898826 containerd[1982]: time="2026-03-03T13:39:17.898747415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 3 13:39:17.899982 containerd[1982]: time="2026-03-03T13:39:17.899941500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 3 13:39:17.919102 containerd[1982]: time="2026-03-03T13:39:17.919033325Z" level=info msg="CreateContainer within sandbox \"da57cd533341c4d3af77317a7d9755a3d2e24b1bbc961e81b949eec2ee16e75b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 3 13:39:17.930045 containerd[1982]: time="2026-03-03T13:39:17.930007481Z" level=info msg="Container 59d23ba3d147919062c2e48135e1ce870702b67466b01bf88f1fce035e7c74a0: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:17.940847 containerd[1982]: time="2026-03-03T13:39:17.940770335Z" level=info msg="CreateContainer within sandbox \"da57cd533341c4d3af77317a7d9755a3d2e24b1bbc961e81b949eec2ee16e75b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"59d23ba3d147919062c2e48135e1ce870702b67466b01bf88f1fce035e7c74a0\"" Mar 3 13:39:17.943013 containerd[1982]: time="2026-03-03T13:39:17.942977615Z" level=info msg="StartContainer for \"59d23ba3d147919062c2e48135e1ce870702b67466b01bf88f1fce035e7c74a0\"" Mar 3 13:39:17.945289 containerd[1982]: time="2026-03-03T13:39:17.945137414Z" level=info msg="connecting to shim 59d23ba3d147919062c2e48135e1ce870702b67466b01bf88f1fce035e7c74a0" address="unix:///run/containerd/s/216027189b7cf82542d87fc3a4455a4d23f705b9c999baed131c2a9dcb511673" protocol=ttrpc version=3 Mar 3 
13:39:17.966071 systemd[1]: Started cri-containerd-59d23ba3d147919062c2e48135e1ce870702b67466b01bf88f1fce035e7c74a0.scope - libcontainer container 59d23ba3d147919062c2e48135e1ce870702b67466b01bf88f1fce035e7c74a0. Mar 3 13:39:18.025760 containerd[1982]: time="2026-03-03T13:39:18.025695556Z" level=info msg="StartContainer for \"59d23ba3d147919062c2e48135e1ce870702b67466b01bf88f1fce035e7c74a0\" returns successfully" Mar 3 13:39:18.648795 kubelet[3339]: E0303 13:39:18.648754 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:18.806562 kubelet[3339]: I0303 13:39:18.806479 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-58dd4f9b54-dv55v" podStartSLOduration=2.323988982 podStartE2EDuration="6.806467451s" podCreationTimestamp="2026-03-03 13:39:12 +0000 UTC" firstStartedPulling="2026-03-03 13:39:13.41729733 +0000 UTC m=+24.911506255" lastFinishedPulling="2026-03-03 13:39:17.899775784 +0000 UTC m=+29.393984724" observedRunningTime="2026-03-03 13:39:18.805306685 +0000 UTC m=+30.299515633" watchObservedRunningTime="2026-03-03 13:39:18.806467451 +0000 UTC m=+30.300676398" Mar 3 13:39:19.796762 kubelet[3339]: I0303 13:39:19.796617 3339 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:39:20.649317 kubelet[3339]: E0303 13:39:20.649269 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:22.647826 kubelet[3339]: E0303 13:39:22.646769 3339 
pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:24.647377 kubelet[3339]: E0303 13:39:24.647311 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:26.647000 kubelet[3339]: E0303 13:39:26.646958 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:28.579239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915239046.mount: Deactivated successfully. 
Mar 3 13:39:28.640545 containerd[1982]: time="2026-03-03T13:39:28.632326083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:28.641105 containerd[1982]: time="2026-03-03T13:39:28.636217375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 3 13:39:28.642691 containerd[1982]: time="2026-03-03T13:39:28.642630279Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:28.646070 containerd[1982]: time="2026-03-03T13:39:28.646007563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:28.646719 containerd[1982]: time="2026-03-03T13:39:28.646660813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 10.74668934s" Mar 3 13:39:28.646719 containerd[1982]: time="2026-03-03T13:39:28.646691477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 3 13:39:28.647617 kubelet[3339]: E0303 13:39:28.647542 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" 
podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:28.655349 containerd[1982]: time="2026-03-03T13:39:28.655300789Z" level=info msg="CreateContainer within sandbox \"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 3 13:39:28.681361 containerd[1982]: time="2026-03-03T13:39:28.679999358Z" level=info msg="Container ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:28.685155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390334266.mount: Deactivated successfully. Mar 3 13:39:28.697036 containerd[1982]: time="2026-03-03T13:39:28.696991458Z" level=info msg="CreateContainer within sandbox \"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee\"" Mar 3 13:39:28.697588 containerd[1982]: time="2026-03-03T13:39:28.697566655Z" level=info msg="StartContainer for \"ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee\"" Mar 3 13:39:28.700500 containerd[1982]: time="2026-03-03T13:39:28.700435297Z" level=info msg="connecting to shim ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee" address="unix:///run/containerd/s/ebc6f35e7f04cc2d46edac16d96185cf20d8175a7f2b99eca4a58e2b256d4d74" protocol=ttrpc version=3 Mar 3 13:39:28.856344 systemd[1]: Started cri-containerd-ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee.scope - libcontainer container ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee. 
Mar 3 13:39:28.960677 containerd[1982]: time="2026-03-03T13:39:28.960638519Z" level=info msg="StartContainer for \"ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee\" returns successfully" Mar 3 13:39:29.183450 systemd[1]: cri-containerd-ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee.scope: Deactivated successfully. Mar 3 13:39:29.183702 systemd[1]: cri-containerd-ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee.scope: Consumed 92ms CPU time, 38.4M memory peak, 19.2M read from disk. Mar 3 13:39:29.185783 containerd[1982]: time="2026-03-03T13:39:29.185726557Z" level=info msg="received container exit event container_id:\"ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee\" id:\"ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee\" pid:4338 exited_at:{seconds:1772545169 nanos:185461184}" Mar 3 13:39:29.573853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab9219b26b6c27c087801c13ffceac2a8ae79afe87ce60326ad444d828fd42ee-rootfs.mount: Deactivated successfully. 
Mar 3 13:39:29.828115 containerd[1982]: time="2026-03-03T13:39:29.827987965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 3 13:39:30.648475 kubelet[3339]: E0303 13:39:30.648222 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:32.647319 kubelet[3339]: E0303 13:39:32.647273 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:33.536016 containerd[1982]: time="2026-03-03T13:39:33.535960636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:33.538340 containerd[1982]: time="2026-03-03T13:39:33.538142388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 3 13:39:33.540847 containerd[1982]: time="2026-03-03T13:39:33.540810989Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:33.545279 containerd[1982]: time="2026-03-03T13:39:33.545227969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:33.545982 containerd[1982]: time="2026-03-03T13:39:33.545863700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with 
image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.717841681s" Mar 3 13:39:33.545982 containerd[1982]: time="2026-03-03T13:39:33.545906268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 3 13:39:33.553353 containerd[1982]: time="2026-03-03T13:39:33.553303703Z" level=info msg="CreateContainer within sandbox \"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 3 13:39:33.569935 containerd[1982]: time="2026-03-03T13:39:33.568780543Z" level=info msg="Container 0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:33.581639 containerd[1982]: time="2026-03-03T13:39:33.581587294Z" level=info msg="CreateContainer within sandbox \"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96\"" Mar 3 13:39:33.582124 containerd[1982]: time="2026-03-03T13:39:33.582095963Z" level=info msg="StartContainer for \"0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96\"" Mar 3 13:39:33.584080 containerd[1982]: time="2026-03-03T13:39:33.584031195Z" level=info msg="connecting to shim 0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96" address="unix:///run/containerd/s/ebc6f35e7f04cc2d46edac16d96185cf20d8175a7f2b99eca4a58e2b256d4d74" protocol=ttrpc version=3 Mar 3 13:39:33.613120 systemd[1]: Started cri-containerd-0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96.scope - libcontainer container 
0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96. Mar 3 13:39:33.683994 containerd[1982]: time="2026-03-03T13:39:33.683920952Z" level=info msg="StartContainer for \"0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96\" returns successfully" Mar 3 13:39:34.646925 kubelet[3339]: E0303 13:39:34.646846 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002" Mar 3 13:39:35.780034 systemd[1]: cri-containerd-0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96.scope: Deactivated successfully. Mar 3 13:39:35.781404 systemd[1]: cri-containerd-0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96.scope: Consumed 582ms CPU time, 170.5M memory peak, 2.5M read from disk, 177M written to disk. Mar 3 13:39:35.786369 containerd[1982]: time="2026-03-03T13:39:35.786268217Z" level=info msg="received container exit event container_id:\"0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96\" id:\"0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96\" pid:4395 exited_at:{seconds:1772545175 nanos:786017768}" Mar 3 13:39:35.817441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e84d8881980f8321708c1e5dd8817afb37353b2dd193f7dc7585370aa81bd96-rootfs.mount: Deactivated successfully. Mar 3 13:39:35.824412 kubelet[3339]: I0303 13:39:35.824354 3339 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 3 13:39:35.904496 systemd[1]: Created slice kubepods-burstable-pod68c73d15_de59_4430_8497_c7d59f8425a7.slice - libcontainer container kubepods-burstable-pod68c73d15_de59_4430_8497_c7d59f8425a7.slice. 
Mar 3 13:39:35.923138 systemd[1]: Created slice kubepods-burstable-pod4b3d84cb_804e_46c6_ae6a_9106d56643d6.slice - libcontainer container kubepods-burstable-pod4b3d84cb_804e_46c6_ae6a_9106d56643d6.slice. Mar 3 13:39:35.925129 containerd[1982]: time="2026-03-03T13:39:35.924332065Z" level=info msg="CreateContainer within sandbox \"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 3 13:39:35.935206 systemd[1]: Created slice kubepods-besteffort-pod55b7c3f2_95aa_4753_96ac_cfd56faee332.slice - libcontainer container kubepods-besteffort-pod55b7c3f2_95aa_4753_96ac_cfd56faee332.slice. Mar 3 13:39:35.940779 containerd[1982]: time="2026-03-03T13:39:35.940657480Z" level=info msg="Container 57aedb53f78d78ddb9908d1e2cd3ee5661bcbeb66c0a757ed275d589714d86c0: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:35.951368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546235172.mount: Deactivated successfully. Mar 3 13:39:35.964705 containerd[1982]: time="2026-03-03T13:39:35.964655616Z" level=info msg="CreateContainer within sandbox \"e21271da06531ddfe2951612f6e8de0e569cca88b883d182d817b028915a0050\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"57aedb53f78d78ddb9908d1e2cd3ee5661bcbeb66c0a757ed275d589714d86c0\"" Mar 3 13:39:35.969112 containerd[1982]: time="2026-03-03T13:39:35.968089023Z" level=info msg="StartContainer for \"57aedb53f78d78ddb9908d1e2cd3ee5661bcbeb66c0a757ed275d589714d86c0\"" Mar 3 13:39:35.970688 systemd[1]: Created slice kubepods-besteffort-pod7b276f3a_0d5a_4ce2_8383_649a836c75b2.slice - libcontainer container kubepods-besteffort-pod7b276f3a_0d5a_4ce2_8383_649a836c75b2.slice. 
Mar 3 13:39:35.975288 containerd[1982]: time="2026-03-03T13:39:35.975225998Z" level=info msg="connecting to shim 57aedb53f78d78ddb9908d1e2cd3ee5661bcbeb66c0a757ed275d589714d86c0" address="unix:///run/containerd/s/ebc6f35e7f04cc2d46edac16d96185cf20d8175a7f2b99eca4a58e2b256d4d74" protocol=ttrpc version=3
Mar 3 13:39:35.985469 systemd[1]: Created slice kubepods-besteffort-pod80c3033b_9292_4a4e_bd6c_4aaaed72fa18.slice - libcontainer container kubepods-besteffort-pod80c3033b_9292_4a4e_bd6c_4aaaed72fa18.slice.
Mar 3 13:39:36.001739 systemd[1]: Created slice kubepods-besteffort-pod7226ca70_f487_4c32_a12c_d3256900e3a0.slice - libcontainer container kubepods-besteffort-pod7226ca70_f487_4c32_a12c_d3256900e3a0.slice.
Mar 3 13:39:36.005318 kubelet[3339]: I0303 13:39:36.003552 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdwj\" (UniqueName: \"kubernetes.io/projected/80c3033b-9292-4a4e-bd6c-4aaaed72fa18-kube-api-access-8fdwj\") pod \"calico-apiserver-5bf8fd54b9-c6jwv\" (UID: \"80c3033b-9292-4a4e-bd6c-4aaaed72fa18\") " pod="calico-system/calico-apiserver-5bf8fd54b9-c6jwv"
Mar 3 13:39:36.005318 kubelet[3339]: I0303 13:39:36.003593 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-ca-bundle\") pod \"whisker-69495fc6dc-fwd4r\" (UID: \"7226ca70-f487-4c32-a12c-d3256900e3a0\") " pod="calico-system/whisker-69495fc6dc-fwd4r"
Mar 3 13:39:36.005318 kubelet[3339]: I0303 13:39:36.003619 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t5v9\" (UniqueName: \"kubernetes.io/projected/7226ca70-f487-4c32-a12c-d3256900e3a0-kube-api-access-4t5v9\") pod \"whisker-69495fc6dc-fwd4r\" (UID: \"7226ca70-f487-4c32-a12c-d3256900e3a0\") " pod="calico-system/whisker-69495fc6dc-fwd4r"
Mar 3 13:39:36.005318 kubelet[3339]: I0303 13:39:36.003651 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx2vc\" (UniqueName: \"kubernetes.io/projected/68c73d15-de59-4430-8497-c7d59f8425a7-kube-api-access-rx2vc\") pod \"coredns-7d764666f9-d27tl\" (UID: \"68c73d15-de59-4430-8497-c7d59f8425a7\") " pod="kube-system/coredns-7d764666f9-d27tl"
Mar 3 13:39:36.005318 kubelet[3339]: I0303 13:39:36.003677 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b276f3a-0d5a-4ce2-8383-649a836c75b2-calico-apiserver-certs\") pod \"calico-apiserver-5bf8fd54b9-szfgf\" (UID: \"7b276f3a-0d5a-4ce2-8383-649a836c75b2\") " pod="calico-system/calico-apiserver-5bf8fd54b9-szfgf"
Mar 3 13:39:36.005568 kubelet[3339]: I0303 13:39:36.003703 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/80c3033b-9292-4a4e-bd6c-4aaaed72fa18-calico-apiserver-certs\") pod \"calico-apiserver-5bf8fd54b9-c6jwv\" (UID: \"80c3033b-9292-4a4e-bd6c-4aaaed72fa18\") " pod="calico-system/calico-apiserver-5bf8fd54b9-c6jwv"
Mar 3 13:39:36.005568 kubelet[3339]: I0303 13:39:36.003731 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1782f57c-b194-4dc8-a4a4-a3f5d2024f13-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-4mdtz\" (UID: \"1782f57c-b194-4dc8-a4a4-a3f5d2024f13\") " pod="calico-system/goldmane-9f7667bb8-4mdtz"
Mar 3 13:39:36.005568 kubelet[3339]: I0303 13:39:36.003757 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1782f57c-b194-4dc8-a4a4-a3f5d2024f13-goldmane-key-pair\") pod \"goldmane-9f7667bb8-4mdtz\" (UID: \"1782f57c-b194-4dc8-a4a4-a3f5d2024f13\") " pod="calico-system/goldmane-9f7667bb8-4mdtz"
Mar 3 13:39:36.005568 kubelet[3339]: I0303 13:39:36.003779 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-nginx-config\") pod \"whisker-69495fc6dc-fwd4r\" (UID: \"7226ca70-f487-4c32-a12c-d3256900e3a0\") " pod="calico-system/whisker-69495fc6dc-fwd4r"
Mar 3 13:39:36.005568 kubelet[3339]: I0303 13:39:36.003810 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68c73d15-de59-4430-8497-c7d59f8425a7-config-volume\") pod \"coredns-7d764666f9-d27tl\" (UID: \"68c73d15-de59-4430-8497-c7d59f8425a7\") " pod="kube-system/coredns-7d764666f9-d27tl"
Mar 3 13:39:36.005814 kubelet[3339]: I0303 13:39:36.003827 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b7c3f2-95aa-4753-96ac-cfd56faee332-tigera-ca-bundle\") pod \"calico-kube-controllers-6dc9c78b65-szhdf\" (UID: \"55b7c3f2-95aa-4753-96ac-cfd56faee332\") " pod="calico-system/calico-kube-controllers-6dc9c78b65-szhdf"
Mar 3 13:39:36.005814 kubelet[3339]: I0303 13:39:36.003842 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1782f57c-b194-4dc8-a4a4-a3f5d2024f13-config\") pod \"goldmane-9f7667bb8-4mdtz\" (UID: \"1782f57c-b194-4dc8-a4a4-a3f5d2024f13\") " pod="calico-system/goldmane-9f7667bb8-4mdtz"
Mar 3 13:39:36.006950 kubelet[3339]: I0303 13:39:36.006911 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-backend-key-pair\") pod \"whisker-69495fc6dc-fwd4r\" (UID: \"7226ca70-f487-4c32-a12c-d3256900e3a0\") " pod="calico-system/whisker-69495fc6dc-fwd4r"
Mar 3 13:39:36.007190 kubelet[3339]: I0303 13:39:36.007167 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b3d84cb-804e-46c6-ae6a-9106d56643d6-config-volume\") pod \"coredns-7d764666f9-hm7z4\" (UID: \"4b3d84cb-804e-46c6-ae6a-9106d56643d6\") " pod="kube-system/coredns-7d764666f9-hm7z4"
Mar 3 13:39:36.008873 kubelet[3339]: I0303 13:39:36.008559 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4fxd\" (UniqueName: \"kubernetes.io/projected/7b276f3a-0d5a-4ce2-8383-649a836c75b2-kube-api-access-t4fxd\") pod \"calico-apiserver-5bf8fd54b9-szfgf\" (UID: \"7b276f3a-0d5a-4ce2-8383-649a836c75b2\") " pod="calico-system/calico-apiserver-5bf8fd54b9-szfgf"
Mar 3 13:39:36.008873 kubelet[3339]: I0303 13:39:36.008600 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmxmn\" (UniqueName: \"kubernetes.io/projected/4b3d84cb-804e-46c6-ae6a-9106d56643d6-kube-api-access-xmxmn\") pod \"coredns-7d764666f9-hm7z4\" (UID: \"4b3d84cb-804e-46c6-ae6a-9106d56643d6\") " pod="kube-system/coredns-7d764666f9-hm7z4"
Mar 3 13:39:36.008873 kubelet[3339]: I0303 13:39:36.008617 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qp7\" (UniqueName: \"kubernetes.io/projected/55b7c3f2-95aa-4753-96ac-cfd56faee332-kube-api-access-82qp7\") pod \"calico-kube-controllers-6dc9c78b65-szhdf\" (UID: \"55b7c3f2-95aa-4753-96ac-cfd56faee332\") " pod="calico-system/calico-kube-controllers-6dc9c78b65-szhdf"
Mar 3 13:39:36.008873 kubelet[3339]: I0303 13:39:36.008632 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdz5\" (UniqueName: \"kubernetes.io/projected/1782f57c-b194-4dc8-a4a4-a3f5d2024f13-kube-api-access-5jdz5\") pod \"goldmane-9f7667bb8-4mdtz\" (UID: \"1782f57c-b194-4dc8-a4a4-a3f5d2024f13\") " pod="calico-system/goldmane-9f7667bb8-4mdtz"
Mar 3 13:39:36.026540 systemd[1]: Created slice kubepods-besteffort-pod1782f57c_b194_4dc8_a4a4_a3f5d2024f13.slice - libcontainer container kubepods-besteffort-pod1782f57c_b194_4dc8_a4a4_a3f5d2024f13.slice.
Mar 3 13:39:36.041219 systemd[1]: Started cri-containerd-57aedb53f78d78ddb9908d1e2cd3ee5661bcbeb66c0a757ed275d589714d86c0.scope - libcontainer container 57aedb53f78d78ddb9908d1e2cd3ee5661bcbeb66c0a757ed275d589714d86c0.
Mar 3 13:39:36.195710 containerd[1982]: time="2026-03-03T13:39:36.195669188Z" level=info msg="StartContainer for \"57aedb53f78d78ddb9908d1e2cd3ee5661bcbeb66c0a757ed275d589714d86c0\" returns successfully"
Mar 3 13:39:36.223220 containerd[1982]: time="2026-03-03T13:39:36.223186706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d27tl,Uid:68c73d15-de59-4430-8497-c7d59f8425a7,Namespace:kube-system,Attempt:0,}"
Mar 3 13:39:36.237376 containerd[1982]: time="2026-03-03T13:39:36.237259693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hm7z4,Uid:4b3d84cb-804e-46c6-ae6a-9106d56643d6,Namespace:kube-system,Attempt:0,}"
Mar 3 13:39:36.288752 containerd[1982]: time="2026-03-03T13:39:36.288428567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9c78b65-szhdf,Uid:55b7c3f2-95aa-4753-96ac-cfd56faee332,Namespace:calico-system,Attempt:0,}"
Mar 3 13:39:36.300877 containerd[1982]: time="2026-03-03T13:39:36.300727380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8fd54b9-c6jwv,Uid:80c3033b-9292-4a4e-bd6c-4aaaed72fa18,Namespace:calico-system,Attempt:0,}"
Mar 3 13:39:36.302350 containerd[1982]: time="2026-03-03T13:39:36.302300568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8fd54b9-szfgf,Uid:7b276f3a-0d5a-4ce2-8383-649a836c75b2,Namespace:calico-system,Attempt:0,}"
Mar 3 13:39:36.318295 containerd[1982]: time="2026-03-03T13:39:36.318249012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69495fc6dc-fwd4r,Uid:7226ca70-f487-4c32-a12c-d3256900e3a0,Namespace:calico-system,Attempt:0,}"
Mar 3 13:39:36.335483 containerd[1982]: time="2026-03-03T13:39:36.335447557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-4mdtz,Uid:1782f57c-b194-4dc8-a4a4-a3f5d2024f13,Namespace:calico-system,Attempt:0,}"
Mar 3 13:39:36.654701 systemd[1]: Created slice kubepods-besteffort-podc6298661_1013_4f89_bedc_335a967b2002.slice - libcontainer container kubepods-besteffort-podc6298661_1013_4f89_bedc_335a967b2002.slice.
Mar 3 13:39:36.665357 containerd[1982]: time="2026-03-03T13:39:36.665064021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwlnv,Uid:c6298661-1013-4f89-bedc-335a967b2002,Namespace:calico-system,Attempt:0,}"
Mar 3 13:39:36.698942 containerd[1982]: time="2026-03-03T13:39:36.698893936Z" level=error msg="Failed to destroy network for sandbox \"7a328e892761017d437c893c07c8c1020a97003c3b47b814b3fb402a2ffd5a43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.702360 containerd[1982]: time="2026-03-03T13:39:36.702162159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8fd54b9-szfgf,Uid:7b276f3a-0d5a-4ce2-8383-649a836c75b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a328e892761017d437c893c07c8c1020a97003c3b47b814b3fb402a2ffd5a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.704032 kubelet[3339]: E0303 13:39:36.703772 3339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a328e892761017d437c893c07c8c1020a97003c3b47b814b3fb402a2ffd5a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.704032 kubelet[3339]: E0303 13:39:36.703961 3339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a328e892761017d437c893c07c8c1020a97003c3b47b814b3fb402a2ffd5a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bf8fd54b9-szfgf"
Mar 3 13:39:36.704032 kubelet[3339]: E0303 13:39:36.703980 3339 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a328e892761017d437c893c07c8c1020a97003c3b47b814b3fb402a2ffd5a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bf8fd54b9-szfgf"
Mar 3 13:39:36.706979 kubelet[3339]: E0303 13:39:36.704529 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bf8fd54b9-szfgf_calico-system(7b276f3a-0d5a-4ce2-8383-649a836c75b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bf8fd54b9-szfgf_calico-system(7b276f3a-0d5a-4ce2-8383-649a836c75b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a328e892761017d437c893c07c8c1020a97003c3b47b814b3fb402a2ffd5a43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5bf8fd54b9-szfgf" podUID="7b276f3a-0d5a-4ce2-8383-649a836c75b2"
Mar 3 13:39:36.738995 containerd[1982]: time="2026-03-03T13:39:36.738941890Z" level=error msg="Failed to destroy network for sandbox \"394a2e69dfbc11023d1ecc58b8d81f984d29dce8f12f3946c1dba0cc271f7285\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.741139 containerd[1982]: time="2026-03-03T13:39:36.741101596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hm7z4,Uid:4b3d84cb-804e-46c6-ae6a-9106d56643d6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"394a2e69dfbc11023d1ecc58b8d81f984d29dce8f12f3946c1dba0cc271f7285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.741852 kubelet[3339]: E0303 13:39:36.741455 3339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394a2e69dfbc11023d1ecc58b8d81f984d29dce8f12f3946c1dba0cc271f7285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.741852 kubelet[3339]: E0303 13:39:36.741504 3339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394a2e69dfbc11023d1ecc58b8d81f984d29dce8f12f3946c1dba0cc271f7285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-hm7z4"
Mar 3 13:39:36.741852 kubelet[3339]: E0303 13:39:36.741522 3339 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394a2e69dfbc11023d1ecc58b8d81f984d29dce8f12f3946c1dba0cc271f7285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-hm7z4"
Mar 3 13:39:36.742073 kubelet[3339]: E0303 13:39:36.741577 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-hm7z4_kube-system(4b3d84cb-804e-46c6-ae6a-9106d56643d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-hm7z4_kube-system(4b3d84cb-804e-46c6-ae6a-9106d56643d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"394a2e69dfbc11023d1ecc58b8d81f984d29dce8f12f3946c1dba0cc271f7285\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-hm7z4" podUID="4b3d84cb-804e-46c6-ae6a-9106d56643d6"
Mar 3 13:39:36.754511 containerd[1982]: time="2026-03-03T13:39:36.754460002Z" level=error msg="Failed to destroy network for sandbox \"4594dd71968f8a7fd28c10be37587dae7634848377885376905921e7b460ee36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.759416 containerd[1982]: time="2026-03-03T13:39:36.759361347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d27tl,Uid:68c73d15-de59-4430-8497-c7d59f8425a7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4594dd71968f8a7fd28c10be37587dae7634848377885376905921e7b460ee36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.759608 kubelet[3339]: E0303 13:39:36.759580 3339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4594dd71968f8a7fd28c10be37587dae7634848377885376905921e7b460ee36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.759672 kubelet[3339]: E0303 13:39:36.759625 3339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4594dd71968f8a7fd28c10be37587dae7634848377885376905921e7b460ee36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-d27tl"
Mar 3 13:39:36.759672 kubelet[3339]: E0303 13:39:36.759642 3339 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4594dd71968f8a7fd28c10be37587dae7634848377885376905921e7b460ee36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-d27tl"
Mar 3 13:39:36.759744 kubelet[3339]: E0303 13:39:36.759689 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-d27tl_kube-system(68c73d15-de59-4430-8497-c7d59f8425a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-d27tl_kube-system(68c73d15-de59-4430-8497-c7d59f8425a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4594dd71968f8a7fd28c10be37587dae7634848377885376905921e7b460ee36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-d27tl" podUID="68c73d15-de59-4430-8497-c7d59f8425a7"
Mar 3 13:39:36.765686 containerd[1982]: time="2026-03-03T13:39:36.765579987Z" level=error msg="Failed to destroy network for sandbox \"66fa178d15b3363d4e3f835dd672bd63275bd3b9553cd357eb52e80da04dc88e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.768792 containerd[1982]: time="2026-03-03T13:39:36.768749315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69495fc6dc-fwd4r,Uid:7226ca70-f487-4c32-a12c-d3256900e3a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66fa178d15b3363d4e3f835dd672bd63275bd3b9553cd357eb52e80da04dc88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.769399 kubelet[3339]: E0303 13:39:36.769363 3339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66fa178d15b3363d4e3f835dd672bd63275bd3b9553cd357eb52e80da04dc88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.769473 kubelet[3339]: E0303 13:39:36.769450 3339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66fa178d15b3363d4e3f835dd672bd63275bd3b9553cd357eb52e80da04dc88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69495fc6dc-fwd4r"
Mar 3 13:39:36.769532 kubelet[3339]: E0303 13:39:36.769469 3339 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66fa178d15b3363d4e3f835dd672bd63275bd3b9553cd357eb52e80da04dc88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69495fc6dc-fwd4r"
Mar 3 13:39:36.769646 kubelet[3339]: E0303 13:39:36.769523 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69495fc6dc-fwd4r_calico-system(7226ca70-f487-4c32-a12c-d3256900e3a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69495fc6dc-fwd4r_calico-system(7226ca70-f487-4c32-a12c-d3256900e3a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66fa178d15b3363d4e3f835dd672bd63275bd3b9553cd357eb52e80da04dc88e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69495fc6dc-fwd4r" podUID="7226ca70-f487-4c32-a12c-d3256900e3a0"
Mar 3 13:39:36.773599 containerd[1982]: time="2026-03-03T13:39:36.773558649Z" level=error msg="Failed to destroy network for sandbox \"c03acbb4718bb4ce2f0e99bb0049adebd4720792a304b7de2f3a358a21711273\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.774298 containerd[1982]: time="2026-03-03T13:39:36.774197473Z" level=error msg="Failed to destroy network for sandbox \"0965e5843035b4138c4cc17fdbb7daf39fa5c932905fc0c6e05c3fd09fc24c27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.775504 containerd[1982]: time="2026-03-03T13:39:36.775450221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-4mdtz,Uid:1782f57c-b194-4dc8-a4a4-a3f5d2024f13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c03acbb4718bb4ce2f0e99bb0049adebd4720792a304b7de2f3a358a21711273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.776168 kubelet[3339]: E0303 13:39:36.776128 3339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c03acbb4718bb4ce2f0e99bb0049adebd4720792a304b7de2f3a358a21711273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.776279 kubelet[3339]: E0303 13:39:36.776176 3339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c03acbb4718bb4ce2f0e99bb0049adebd4720792a304b7de2f3a358a21711273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-4mdtz"
Mar 3 13:39:36.776279 kubelet[3339]: E0303 13:39:36.776193 3339 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c03acbb4718bb4ce2f0e99bb0049adebd4720792a304b7de2f3a358a21711273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-4mdtz"
Mar 3 13:39:36.776279 kubelet[3339]: E0303 13:39:36.776236 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-4mdtz_calico-system(1782f57c-b194-4dc8-a4a4-a3f5d2024f13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-4mdtz_calico-system(1782f57c-b194-4dc8-a4a4-a3f5d2024f13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c03acbb4718bb4ce2f0e99bb0049adebd4720792a304b7de2f3a358a21711273\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-4mdtz" podUID="1782f57c-b194-4dc8-a4a4-a3f5d2024f13"
Mar 3 13:39:36.777061 containerd[1982]: time="2026-03-03T13:39:36.776667326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8fd54b9-c6jwv,Uid:80c3033b-9292-4a4e-bd6c-4aaaed72fa18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0965e5843035b4138c4cc17fdbb7daf39fa5c932905fc0c6e05c3fd09fc24c27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.778063 kubelet[3339]: E0303 13:39:36.778005 3339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0965e5843035b4138c4cc17fdbb7daf39fa5c932905fc0c6e05c3fd09fc24c27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.778063 kubelet[3339]: E0303 13:39:36.778048 3339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0965e5843035b4138c4cc17fdbb7daf39fa5c932905fc0c6e05c3fd09fc24c27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bf8fd54b9-c6jwv"
Mar 3 13:39:36.778690 kubelet[3339]: E0303 13:39:36.778069 3339 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0965e5843035b4138c4cc17fdbb7daf39fa5c932905fc0c6e05c3fd09fc24c27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bf8fd54b9-c6jwv"
Mar 3 13:39:36.778690 kubelet[3339]: E0303 13:39:36.778113 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bf8fd54b9-c6jwv_calico-system(80c3033b-9292-4a4e-bd6c-4aaaed72fa18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bf8fd54b9-c6jwv_calico-system(80c3033b-9292-4a4e-bd6c-4aaaed72fa18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0965e5843035b4138c4cc17fdbb7daf39fa5c932905fc0c6e05c3fd09fc24c27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5bf8fd54b9-c6jwv" podUID="80c3033b-9292-4a4e-bd6c-4aaaed72fa18"
Mar 3 13:39:36.789810 containerd[1982]: time="2026-03-03T13:39:36.789750578Z" level=error msg="Failed to destroy network for sandbox \"8a053c65bc9929f0623cfd6490f01867d21852e4130e80d62a5ecbd8dccac0fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.791702 containerd[1982]: time="2026-03-03T13:39:36.791621683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9c78b65-szhdf,Uid:55b7c3f2-95aa-4753-96ac-cfd56faee332,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a053c65bc9929f0623cfd6490f01867d21852e4130e80d62a5ecbd8dccac0fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.792047 kubelet[3339]: E0303 13:39:36.792008 3339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a053c65bc9929f0623cfd6490f01867d21852e4130e80d62a5ecbd8dccac0fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.792107 kubelet[3339]: E0303 13:39:36.792058 3339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a053c65bc9929f0623cfd6490f01867d21852e4130e80d62a5ecbd8dccac0fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc9c78b65-szhdf"
Mar 3 13:39:36.792107 kubelet[3339]: E0303 13:39:36.792076 3339 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a053c65bc9929f0623cfd6490f01867d21852e4130e80d62a5ecbd8dccac0fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc9c78b65-szhdf"
Mar 3 13:39:36.792170 kubelet[3339]: E0303 13:39:36.792125 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dc9c78b65-szhdf_calico-system(55b7c3f2-95aa-4753-96ac-cfd56faee332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dc9c78b65-szhdf_calico-system(55b7c3f2-95aa-4753-96ac-cfd56faee332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a053c65bc9929f0623cfd6490f01867d21852e4130e80d62a5ecbd8dccac0fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc9c78b65-szhdf" podUID="55b7c3f2-95aa-4753-96ac-cfd56faee332"
Mar 3 13:39:36.895009 containerd[1982]: time="2026-03-03T13:39:36.894752449Z" level=error msg="Failed to destroy network for sandbox \"015b248b81572f52f50eb1ed2952846755f7ff752aaabb8065e0e404f362e75f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.904815 systemd[1]: run-netns-cni\x2da8a48bb2\x2d119b\x2d2a53\x2df7f8\x2deb7cf0020dcb.mount: Deactivated successfully.
Mar 3 13:39:36.907118 containerd[1982]: time="2026-03-03T13:39:36.906066204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwlnv,Uid:c6298661-1013-4f89-bedc-335a967b2002,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"015b248b81572f52f50eb1ed2952846755f7ff752aaabb8065e0e404f362e75f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.909146 kubelet[3339]: E0303 13:39:36.908530 3339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"015b248b81572f52f50eb1ed2952846755f7ff752aaabb8065e0e404f362e75f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:39:36.909146 kubelet[3339]: E0303 13:39:36.908594 3339 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"015b248b81572f52f50eb1ed2952846755f7ff752aaabb8065e0e404f362e75f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwlnv"
Mar 3 13:39:36.909146 kubelet[3339]: E0303 13:39:36.908620 3339 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"015b248b81572f52f50eb1ed2952846755f7ff752aaabb8065e0e404f362e75f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwlnv"
Mar 3 13:39:36.909590 kubelet[3339]: E0303 13:39:36.908703 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rwlnv_calico-system(c6298661-1013-4f89-bedc-335a967b2002)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rwlnv_calico-system(c6298661-1013-4f89-bedc-335a967b2002)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"015b248b81572f52f50eb1ed2952846755f7ff752aaabb8065e0e404f362e75f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwlnv" podUID="c6298661-1013-4f89-bedc-335a967b2002"
Mar 3 13:39:37.046901 kubelet[3339]: I0303 13:39:37.046706 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-pbcnq" podStartSLOduration=2.56254596 podStartE2EDuration="25.046686589s" podCreationTimestamp="2026-03-03 13:39:12 +0000 UTC" firstStartedPulling="2026-03-03 13:39:13.369961516 +0000 UTC m=+24.864170445" lastFinishedPulling="2026-03-03 13:39:35.854102135 +0000 UTC m=+47.348311074" observedRunningTime="2026-03-03 13:39:36.929149733 +0000 UTC m=+48.423358682" watchObservedRunningTime="2026-03-03 13:39:37.046686589 +0000 UTC m=+48.540895539"
Mar 3 13:39:37.124911 kubelet[3339]: I0303 13:39:37.124668 3339 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume
\"kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-ca-bundle\") pod \"7226ca70-f487-4c32-a12c-d3256900e3a0\" (UID: \"7226ca70-f487-4c32-a12c-d3256900e3a0\") " Mar 3 13:39:37.126018 kubelet[3339]: I0303 13:39:37.125785 3339 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/7226ca70-f487-4c32-a12c-d3256900e3a0-kube-api-access-4t5v9\" (UniqueName: \"kubernetes.io/projected/7226ca70-f487-4c32-a12c-d3256900e3a0-kube-api-access-4t5v9\") pod \"7226ca70-f487-4c32-a12c-d3256900e3a0\" (UID: \"7226ca70-f487-4c32-a12c-d3256900e3a0\") " Mar 3 13:39:37.126018 kubelet[3339]: I0303 13:39:37.125850 3339 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-backend-key-pair\") pod \"7226ca70-f487-4c32-a12c-d3256900e3a0\" (UID: \"7226ca70-f487-4c32-a12c-d3256900e3a0\") " Mar 3 13:39:37.126335 kubelet[3339]: I0303 13:39:37.125714 3339 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-ca-bundle" pod "7226ca70-f487-4c32-a12c-d3256900e3a0" (UID: "7226ca70-f487-4c32-a12c-d3256900e3a0"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 13:39:37.126987 kubelet[3339]: I0303 13:39:37.126603 3339 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-nginx-config\" (UniqueName: \"kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-nginx-config\") pod \"7226ca70-f487-4c32-a12c-d3256900e3a0\" (UID: \"7226ca70-f487-4c32-a12c-d3256900e3a0\") " Mar 3 13:39:37.126987 kubelet[3339]: I0303 13:39:37.126699 3339 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-ca-bundle\") on node \"ip-172-31-31-254\" DevicePath \"\"" Mar 3 13:39:37.128643 kubelet[3339]: I0303 13:39:37.128618 3339 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-nginx-config" pod "7226ca70-f487-4c32-a12c-d3256900e3a0" (UID: "7226ca70-f487-4c32-a12c-d3256900e3a0"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 13:39:37.141852 kubelet[3339]: I0303 13:39:37.141807 3339 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-backend-key-pair" pod "7226ca70-f487-4c32-a12c-d3256900e3a0" (UID: "7226ca70-f487-4c32-a12c-d3256900e3a0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 3 13:39:37.144628 kubelet[3339]: I0303 13:39:37.143067 3339 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7226ca70-f487-4c32-a12c-d3256900e3a0-kube-api-access-4t5v9" pod "7226ca70-f487-4c32-a12c-d3256900e3a0" (UID: "7226ca70-f487-4c32-a12c-d3256900e3a0"). InnerVolumeSpecName "kube-api-access-4t5v9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 3 13:39:37.144222 systemd[1]: var-lib-kubelet-pods-7226ca70\x2df487\x2d4c32\x2da12c\x2dd3256900e3a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4t5v9.mount: Deactivated successfully. Mar 3 13:39:37.144376 systemd[1]: var-lib-kubelet-pods-7226ca70\x2df487\x2d4c32\x2da12c\x2dd3256900e3a0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 3 13:39:37.228348 kubelet[3339]: I0303 13:39:37.227471 3339 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7226ca70-f487-4c32-a12c-d3256900e3a0-nginx-config\") on node \"ip-172-31-31-254\" DevicePath \"\"" Mar 3 13:39:37.228348 kubelet[3339]: I0303 13:39:37.227511 3339 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4t5v9\" (UniqueName: \"kubernetes.io/projected/7226ca70-f487-4c32-a12c-d3256900e3a0-kube-api-access-4t5v9\") on node \"ip-172-31-31-254\" DevicePath \"\"" Mar 3 13:39:37.228348 kubelet[3339]: I0303 13:39:37.227526 3339 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7226ca70-f487-4c32-a12c-d3256900e3a0-whisker-backend-key-pair\") on node \"ip-172-31-31-254\" DevicePath \"\"" Mar 3 13:39:37.564523 kubelet[3339]: I0303 13:39:37.564136 3339 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:39:37.892767 systemd[1]: Removed slice kubepods-besteffort-pod7226ca70_f487_4c32_a12c_d3256900e3a0.slice - libcontainer container kubepods-besteffort-pod7226ca70_f487_4c32_a12c_d3256900e3a0.slice. Mar 3 13:39:37.991853 systemd[1]: Created slice kubepods-besteffort-podac9b955a_cdae_4090_b335_329046d62475.slice - libcontainer container kubepods-besteffort-podac9b955a_cdae_4090_b335_329046d62475.slice. 
Mar 3 13:39:38.035771 kubelet[3339]: I0303 13:39:38.035734 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac9b955a-cdae-4090-b335-329046d62475-whisker-backend-key-pair\") pod \"whisker-f57fd4f64-mp7cg\" (UID: \"ac9b955a-cdae-4090-b335-329046d62475\") " pod="calico-system/whisker-f57fd4f64-mp7cg" Mar 3 13:39:38.036249 kubelet[3339]: I0303 13:39:38.035828 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ac9b955a-cdae-4090-b335-329046d62475-nginx-config\") pod \"whisker-f57fd4f64-mp7cg\" (UID: \"ac9b955a-cdae-4090-b335-329046d62475\") " pod="calico-system/whisker-f57fd4f64-mp7cg" Mar 3 13:39:38.036249 kubelet[3339]: I0303 13:39:38.035924 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac9b955a-cdae-4090-b335-329046d62475-whisker-ca-bundle\") pod \"whisker-f57fd4f64-mp7cg\" (UID: \"ac9b955a-cdae-4090-b335-329046d62475\") " pod="calico-system/whisker-f57fd4f64-mp7cg" Mar 3 13:39:38.037579 kubelet[3339]: I0303 13:39:38.037507 3339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66b5c\" (UniqueName: \"kubernetes.io/projected/ac9b955a-cdae-4090-b335-329046d62475-kube-api-access-66b5c\") pod \"whisker-f57fd4f64-mp7cg\" (UID: \"ac9b955a-cdae-4090-b335-329046d62475\") " pod="calico-system/whisker-f57fd4f64-mp7cg" Mar 3 13:39:38.299644 containerd[1982]: time="2026-03-03T13:39:38.299530118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f57fd4f64-mp7cg,Uid:ac9b955a-cdae-4090-b335-329046d62475,Namespace:calico-system,Attempt:0,}" Mar 3 13:39:38.572502 systemd-networkd[1711]: cali1803fe64ffe: Link UP Mar 3 13:39:38.574718 systemd-networkd[1711]: cali1803fe64ffe: 
Gained carrier Mar 3 13:39:38.589761 (udev-worker)[4867]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:39:38.599954 containerd[1982]: 2026-03-03 13:39:38.325 [ERROR][4753] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 3 13:39:38.599954 containerd[1982]: 2026-03-03 13:39:38.389 [INFO][4753] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0 whisker-f57fd4f64- calico-system ac9b955a-cdae-4090-b335-329046d62475 920 0 2026-03-03 13:39:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f57fd4f64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-254 whisker-f57fd4f64-mp7cg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1803fe64ffe [] [] }} ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Namespace="calico-system" Pod="whisker-f57fd4f64-mp7cg" WorkloadEndpoint="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-" Mar 3 13:39:38.599954 containerd[1982]: 2026-03-03 13:39:38.389 [INFO][4753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Namespace="calico-system" Pod="whisker-f57fd4f64-mp7cg" WorkloadEndpoint="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" Mar 3 13:39:38.599954 containerd[1982]: 2026-03-03 13:39:38.472 [INFO][4766] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" HandleID="k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" 
Workload="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.489 [INFO][4766] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" HandleID="k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Workload="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9b00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-254", "pod":"whisker-f57fd4f64-mp7cg", "timestamp":"2026-03-03 13:39:38.472383932 +0000 UTC"}, Hostname:"ip-172-31-31-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e9340)} Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.489 [INFO][4766] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.490 [INFO][4766] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.490 [INFO][4766] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-254' Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.495 [INFO][4766] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" host="ip-172-31-31-254" Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.504 [INFO][4766] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-254" Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.509 [INFO][4766] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.511 [INFO][4766] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:38.600190 containerd[1982]: 2026-03-03 13:39:38.513 [INFO][4766] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:38.601571 containerd[1982]: 2026-03-03 13:39:38.513 [INFO][4766] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" host="ip-172-31-31-254" Mar 3 13:39:38.601571 containerd[1982]: 2026-03-03 13:39:38.515 [INFO][4766] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13 Mar 3 13:39:38.601571 containerd[1982]: 2026-03-03 13:39:38.519 [INFO][4766] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" host="ip-172-31-31-254" Mar 3 13:39:38.601571 containerd[1982]: 2026-03-03 13:39:38.527 [INFO][4766] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.1/26] block=192.168.114.0/26 
handle="k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" host="ip-172-31-31-254" Mar 3 13:39:38.601571 containerd[1982]: 2026-03-03 13:39:38.527 [INFO][4766] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.1/26] handle="k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" host="ip-172-31-31-254" Mar 3 13:39:38.601571 containerd[1982]: 2026-03-03 13:39:38.527 [INFO][4766] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:39:38.601571 containerd[1982]: 2026-03-03 13:39:38.527 [INFO][4766] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.1/26] IPv6=[] ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" HandleID="k8s-pod-network.ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Workload="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" Mar 3 13:39:38.601764 containerd[1982]: 2026-03-03 13:39:38.531 [INFO][4753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Namespace="calico-system" Pod="whisker-f57fd4f64-mp7cg" WorkloadEndpoint="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0", GenerateName:"whisker-f57fd4f64-", Namespace:"calico-system", SelfLink:"", UID:"ac9b955a-cdae-4090-b335-329046d62475", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f57fd4f64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"", Pod:"whisker-f57fd4f64-mp7cg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1803fe64ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:38.601764 containerd[1982]: 2026-03-03 13:39:38.531 [INFO][4753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.1/32] ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Namespace="calico-system" Pod="whisker-f57fd4f64-mp7cg" WorkloadEndpoint="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" Mar 3 13:39:38.601858 containerd[1982]: 2026-03-03 13:39:38.531 [INFO][4753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1803fe64ffe ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Namespace="calico-system" Pod="whisker-f57fd4f64-mp7cg" WorkloadEndpoint="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" Mar 3 13:39:38.601858 containerd[1982]: 2026-03-03 13:39:38.575 [INFO][4753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Namespace="calico-system" Pod="whisker-f57fd4f64-mp7cg" WorkloadEndpoint="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" Mar 3 13:39:38.601927 containerd[1982]: 2026-03-03 13:39:38.576 [INFO][4753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Namespace="calico-system" 
Pod="whisker-f57fd4f64-mp7cg" WorkloadEndpoint="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0", GenerateName:"whisker-f57fd4f64-", Namespace:"calico-system", SelfLink:"", UID:"ac9b955a-cdae-4090-b335-329046d62475", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f57fd4f64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13", Pod:"whisker-f57fd4f64-mp7cg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1803fe64ffe", MAC:"82:30:26:5e:72:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:38.601987 containerd[1982]: 2026-03-03 13:39:38.588 [INFO][4753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" Namespace="calico-system" Pod="whisker-f57fd4f64-mp7cg" WorkloadEndpoint="ip--172--31--31--254-k8s-whisker--f57fd4f64--mp7cg-eth0" Mar 3 13:39:38.656930 kubelet[3339]: I0303 13:39:38.656866 3339 
kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7226ca70-f487-4c32-a12c-d3256900e3a0" path="/var/lib/kubelet/pods/7226ca70-f487-4c32-a12c-d3256900e3a0/volumes" Mar 3 13:39:38.727426 containerd[1982]: time="2026-03-03T13:39:38.727387560Z" level=info msg="connecting to shim ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13" address="unix:///run/containerd/s/73ace7da14e03c5ac9e118bbc0068bc5bec066fd872a21dded2b92de2a91369d" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:38.831551 systemd[1]: Started cri-containerd-ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13.scope - libcontainer container ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13. Mar 3 13:39:39.030113 containerd[1982]: time="2026-03-03T13:39:39.029916906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f57fd4f64-mp7cg,Uid:ac9b955a-cdae-4090-b335-329046d62475,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13\"" Mar 3 13:39:39.066610 containerd[1982]: time="2026-03-03T13:39:39.066085076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 3 13:39:40.055554 systemd-networkd[1711]: cali1803fe64ffe: Gained IPv6LL Mar 3 13:39:40.321959 (udev-worker)[4866]: Network interface NamePolicy= disabled on kernel command line. 
Mar 3 13:39:40.322708 systemd-networkd[1711]: vxlan.calico: Link UP Mar 3 13:39:40.322712 systemd-networkd[1711]: vxlan.calico: Gained carrier Mar 3 13:39:41.416577 containerd[1982]: time="2026-03-03T13:39:41.416113965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:41.422183 containerd[1982]: time="2026-03-03T13:39:41.422137687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 3 13:39:41.432360 containerd[1982]: time="2026-03-03T13:39:41.432307529Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:41.435102 containerd[1982]: time="2026-03-03T13:39:41.434990944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:41.437593 containerd[1982]: time="2026-03-03T13:39:41.437425403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.370086233s" Mar 3 13:39:41.437788 containerd[1982]: time="2026-03-03T13:39:41.437688715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 3 13:39:41.516043 containerd[1982]: time="2026-03-03T13:39:41.515993202Z" level=info msg="CreateContainer within sandbox \"ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13\" for container 
&ContainerMetadata{Name:whisker,Attempt:0,}" Mar 3 13:39:41.532053 containerd[1982]: time="2026-03-03T13:39:41.532011687Z" level=info msg="Container 6096c2734d9a8a45fde20d69abb58c9e8e03fa70bda1fb8eb4b163b0dc2a9635: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:41.537396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453313050.mount: Deactivated successfully. Mar 3 13:39:41.564349 containerd[1982]: time="2026-03-03T13:39:41.564303181Z" level=info msg="CreateContainer within sandbox \"ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6096c2734d9a8a45fde20d69abb58c9e8e03fa70bda1fb8eb4b163b0dc2a9635\"" Mar 3 13:39:41.565580 containerd[1982]: time="2026-03-03T13:39:41.565196088Z" level=info msg="StartContainer for \"6096c2734d9a8a45fde20d69abb58c9e8e03fa70bda1fb8eb4b163b0dc2a9635\"" Mar 3 13:39:41.567652 containerd[1982]: time="2026-03-03T13:39:41.567618590Z" level=info msg="connecting to shim 6096c2734d9a8a45fde20d69abb58c9e8e03fa70bda1fb8eb4b163b0dc2a9635" address="unix:///run/containerd/s/73ace7da14e03c5ac9e118bbc0068bc5bec066fd872a21dded2b92de2a91369d" protocol=ttrpc version=3 Mar 3 13:39:41.641245 systemd-networkd[1711]: vxlan.calico: Gained IPv6LL Mar 3 13:39:41.648155 systemd[1]: Started cri-containerd-6096c2734d9a8a45fde20d69abb58c9e8e03fa70bda1fb8eb4b163b0dc2a9635.scope - libcontainer container 6096c2734d9a8a45fde20d69abb58c9e8e03fa70bda1fb8eb4b163b0dc2a9635. Mar 3 13:39:41.756022 containerd[1982]: time="2026-03-03T13:39:41.755844429Z" level=info msg="StartContainer for \"6096c2734d9a8a45fde20d69abb58c9e8e03fa70bda1fb8eb4b163b0dc2a9635\" returns successfully" Mar 3 13:39:41.759633 containerd[1982]: time="2026-03-03T13:39:41.759606516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 3 13:39:43.800189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1412746748.mount: Deactivated successfully. 
Mar 3 13:39:43.815470 containerd[1982]: time="2026-03-03T13:39:43.815419290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:43.816374 containerd[1982]: time="2026-03-03T13:39:43.816326658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 3 13:39:43.817126 containerd[1982]: time="2026-03-03T13:39:43.817060113Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:43.819393 containerd[1982]: time="2026-03-03T13:39:43.819365200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:43.819968 containerd[1982]: time="2026-03-03T13:39:43.819936825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.06006497s" Mar 3 13:39:43.820096 containerd[1982]: time="2026-03-03T13:39:43.820034411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 3 13:39:43.840375 containerd[1982]: time="2026-03-03T13:39:43.840305625Z" level=info msg="CreateContainer within sandbox \"ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 3 13:39:43.856931 
containerd[1982]: time="2026-03-03T13:39:43.856132589Z" level=info msg="Container 5156f4506ee2d38c4e7bf1f347e0284c9da332916ef709b08527226267fc6160: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:43.867977 containerd[1982]: time="2026-03-03T13:39:43.867936918Z" level=info msg="CreateContainer within sandbox \"ba675b3ac46ac03ba09768b23255688164fe1500b372d885531dde106a870a13\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5156f4506ee2d38c4e7bf1f347e0284c9da332916ef709b08527226267fc6160\"" Mar 3 13:39:43.869084 containerd[1982]: time="2026-03-03T13:39:43.869055607Z" level=info msg="StartContainer for \"5156f4506ee2d38c4e7bf1f347e0284c9da332916ef709b08527226267fc6160\"" Mar 3 13:39:43.871244 containerd[1982]: time="2026-03-03T13:39:43.871201657Z" level=info msg="connecting to shim 5156f4506ee2d38c4e7bf1f347e0284c9da332916ef709b08527226267fc6160" address="unix:///run/containerd/s/73ace7da14e03c5ac9e118bbc0068bc5bec066fd872a21dded2b92de2a91369d" protocol=ttrpc version=3 Mar 3 13:39:43.894154 systemd[1]: Started cri-containerd-5156f4506ee2d38c4e7bf1f347e0284c9da332916ef709b08527226267fc6160.scope - libcontainer container 5156f4506ee2d38c4e7bf1f347e0284c9da332916ef709b08527226267fc6160. 
Mar 3 13:39:43.946642 containerd[1982]: time="2026-03-03T13:39:43.946570788Z" level=info msg="StartContainer for \"5156f4506ee2d38c4e7bf1f347e0284c9da332916ef709b08527226267fc6160\" returns successfully" Mar 3 13:39:43.974636 ntpd[2230]: Listen normally on 6 vxlan.calico 192.168.114.0:123 Mar 3 13:39:43.975009 ntpd[2230]: Listen normally on 7 cali1803fe64ffe [fe80::ecee:eeff:feee:eeee%4]:123 Mar 3 13:39:43.977632 ntpd[2230]: 3 Mar 13:39:43 ntpd[2230]: Listen normally on 6 vxlan.calico 192.168.114.0:123 Mar 3 13:39:43.977632 ntpd[2230]: 3 Mar 13:39:43 ntpd[2230]: Listen normally on 7 cali1803fe64ffe [fe80::ecee:eeff:feee:eeee%4]:123 Mar 3 13:39:43.977632 ntpd[2230]: 3 Mar 13:39:43 ntpd[2230]: Listen normally on 8 vxlan.calico [fe80::64a1:33ff:fe35:40ef%5]:123 Mar 3 13:39:43.975071 ntpd[2230]: Listen normally on 8 vxlan.calico [fe80::64a1:33ff:fe35:40ef%5]:123 Mar 3 13:39:44.202287 kubelet[3339]: I0303 13:39:44.199343 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-f57fd4f64-mp7cg" podStartSLOduration=2.409092127 podStartE2EDuration="7.196944563s" podCreationTimestamp="2026-03-03 13:39:37 +0000 UTC" firstStartedPulling="2026-03-03 13:39:39.032869665 +0000 UTC m=+50.527078594" lastFinishedPulling="2026-03-03 13:39:43.820722104 +0000 UTC m=+55.314931030" observedRunningTime="2026-03-03 13:39:44.192414615 +0000 UTC m=+55.686623562" watchObservedRunningTime="2026-03-03 13:39:44.196944563 +0000 UTC m=+55.691153511" Mar 3 13:39:47.425549 systemd[1]: Started sshd@7-172.31.31.254:22-68.220.241.50:58056.service - OpenSSH per-connection server daemon (68.220.241.50:58056). 
Mar 3 13:39:47.650767 containerd[1982]: time="2026-03-03T13:39:47.650653925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwlnv,Uid:c6298661-1013-4f89-bedc-335a967b2002,Namespace:calico-system,Attempt:0,}" Mar 3 13:39:47.652139 containerd[1982]: time="2026-03-03T13:39:47.652084411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-4mdtz,Uid:1782f57c-b194-4dc8-a4a4-a3f5d2024f13,Namespace:calico-system,Attempt:0,}" Mar 3 13:39:47.655446 containerd[1982]: time="2026-03-03T13:39:47.655406507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8fd54b9-c6jwv,Uid:80c3033b-9292-4a4e-bd6c-4aaaed72fa18,Namespace:calico-system,Attempt:0,}" Mar 3 13:39:47.945175 sshd[5149]: Accepted publickey for core from 68.220.241.50 port 58056 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:39:47.949247 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:39:47.956206 systemd-logind[1956]: New session 8 of user core. Mar 3 13:39:47.962050 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 3 13:39:48.276860 systemd-networkd[1711]: cali27c46c42d3f: Link UP Mar 3 13:39:48.277134 systemd-networkd[1711]: cali27c46c42d3f: Gained carrier Mar 3 13:39:48.306021 containerd[1982]: 2026-03-03 13:39:47.971 [INFO][5157] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0 calico-apiserver-5bf8fd54b9- calico-system 80c3033b-9292-4a4e-bd6c-4aaaed72fa18 855 0 2026-03-03 13:39:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bf8fd54b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-254 calico-apiserver-5bf8fd54b9-c6jwv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali27c46c42d3f [] [] }} ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-c6jwv" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-" Mar 3 13:39:48.306021 containerd[1982]: 2026-03-03 13:39:47.972 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-c6jwv" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" Mar 3 13:39:48.306021 containerd[1982]: 2026-03-03 13:39:48.207 [INFO][5194] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" HandleID="k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Workload="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.223 [INFO][5194] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" HandleID="k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Workload="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003743f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-254", "pod":"calico-apiserver-5bf8fd54b9-c6jwv", "timestamp":"2026-03-03 13:39:48.207379342 +0000 UTC"}, Hostname:"ip-172-31-31-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000285a20)} Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.223 [INFO][5194] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.224 [INFO][5194] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.224 [INFO][5194] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-254' Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.229 [INFO][5194] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" host="ip-172-31-31-254" Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.236 [INFO][5194] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-254" Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.242 [INFO][5194] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.245 [INFO][5194] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.306241 containerd[1982]: 2026-03-03 13:39:48.248 [INFO][5194] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.306529 containerd[1982]: 2026-03-03 13:39:48.248 [INFO][5194] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" host="ip-172-31-31-254" Mar 3 13:39:48.306529 containerd[1982]: 2026-03-03 13:39:48.250 [INFO][5194] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e Mar 3 13:39:48.306529 containerd[1982]: 2026-03-03 13:39:48.260 [INFO][5194] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" host="ip-172-31-31-254" Mar 3 13:39:48.306529 containerd[1982]: 2026-03-03 13:39:48.267 [INFO][5194] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.2/26] block=192.168.114.0/26 
handle="k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" host="ip-172-31-31-254" Mar 3 13:39:48.306529 containerd[1982]: 2026-03-03 13:39:48.267 [INFO][5194] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.2/26] handle="k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" host="ip-172-31-31-254" Mar 3 13:39:48.306529 containerd[1982]: 2026-03-03 13:39:48.267 [INFO][5194] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:39:48.306529 containerd[1982]: 2026-03-03 13:39:48.267 [INFO][5194] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.2/26] IPv6=[] ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" HandleID="k8s-pod-network.cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Workload="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" Mar 3 13:39:48.306688 containerd[1982]: 2026-03-03 13:39:48.272 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-c6jwv" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0", GenerateName:"calico-apiserver-5bf8fd54b9-", Namespace:"calico-system", SelfLink:"", UID:"80c3033b-9292-4a4e-bd6c-4aaaed72fa18", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf8fd54b9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"", Pod:"calico-apiserver-5bf8fd54b9-c6jwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali27c46c42d3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:48.306748 containerd[1982]: 2026-03-03 13:39:48.272 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.2/32] ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-c6jwv" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" Mar 3 13:39:48.306748 containerd[1982]: 2026-03-03 13:39:48.272 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27c46c42d3f ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-c6jwv" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" Mar 3 13:39:48.306748 containerd[1982]: 2026-03-03 13:39:48.277 [INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-c6jwv" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" Mar 3 13:39:48.306815 containerd[1982]: 2026-03-03 13:39:48.278 [INFO][5157] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-c6jwv" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0", GenerateName:"calico-apiserver-5bf8fd54b9-", Namespace:"calico-system", SelfLink:"", UID:"80c3033b-9292-4a4e-bd6c-4aaaed72fa18", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf8fd54b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e", Pod:"calico-apiserver-5bf8fd54b9-c6jwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali27c46c42d3f", MAC:"be:ed:9a:8d:78:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:48.306872 containerd[1982]: 2026-03-03 13:39:48.298 [INFO][5157] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-c6jwv" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--c6jwv-eth0" Mar 3 13:39:48.311815 (udev-worker)[5221]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:39:48.400553 (udev-worker)[5225]: Network interface NamePolicy= disabled on kernel command line. Mar 3 13:39:48.408235 systemd-networkd[1711]: calicac5aebb567: Link UP Mar 3 13:39:48.408541 systemd-networkd[1711]: calicac5aebb567: Gained carrier Mar 3 13:39:48.450419 containerd[1982]: 2026-03-03 13:39:47.971 [INFO][5154] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0 goldmane-9f7667bb8- calico-system 1782f57c-b194-4dc8-a4a4-a3f5d2024f13 860 0 2026-03-03 13:39:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-254 goldmane-9f7667bb8-4mdtz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicac5aebb567 [] [] }} ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Namespace="calico-system" Pod="goldmane-9f7667bb8-4mdtz" WorkloadEndpoint="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-" Mar 3 13:39:48.450419 containerd[1982]: 2026-03-03 13:39:47.972 [INFO][5154] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Namespace="calico-system" Pod="goldmane-9f7667bb8-4mdtz" WorkloadEndpoint="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" Mar 3 13:39:48.450419 containerd[1982]: 2026-03-03 13:39:48.207 [INFO][5192] ipam/ipam_plugin.go 235: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" HandleID="k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Workload="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.225 [INFO][5192] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" HandleID="k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Workload="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011cb30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-254", "pod":"goldmane-9f7667bb8-4mdtz", "timestamp":"2026-03-03 13:39:48.207796546 +0000 UTC"}, Hostname:"ip-172-31-31-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ee9a0)} Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.225 [INFO][5192] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.268 [INFO][5192] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.268 [INFO][5192] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-254' Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.331 [INFO][5192] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" host="ip-172-31-31-254" Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.340 [INFO][5192] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-254" Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.347 [INFO][5192] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.350 [INFO][5192] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.450622 containerd[1982]: 2026-03-03 13:39:48.355 [INFO][5192] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.450878 containerd[1982]: 2026-03-03 13:39:48.355 [INFO][5192] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" host="ip-172-31-31-254" Mar 3 13:39:48.450878 containerd[1982]: 2026-03-03 13:39:48.359 [INFO][5192] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e Mar 3 13:39:48.450878 containerd[1982]: 2026-03-03 13:39:48.367 [INFO][5192] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" host="ip-172-31-31-254" Mar 3 13:39:48.450878 containerd[1982]: 2026-03-03 13:39:48.385 [INFO][5192] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.3/26] block=192.168.114.0/26 
handle="k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" host="ip-172-31-31-254" Mar 3 13:39:48.450878 containerd[1982]: 2026-03-03 13:39:48.385 [INFO][5192] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.3/26] handle="k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" host="ip-172-31-31-254" Mar 3 13:39:48.450878 containerd[1982]: 2026-03-03 13:39:48.385 [INFO][5192] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:39:48.450878 containerd[1982]: 2026-03-03 13:39:48.385 [INFO][5192] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.3/26] IPv6=[] ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" HandleID="k8s-pod-network.fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Workload="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" Mar 3 13:39:48.452649 containerd[1982]: 2026-03-03 13:39:48.397 [INFO][5154] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Namespace="calico-system" Pod="goldmane-9f7667bb8-4mdtz" WorkloadEndpoint="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1782f57c-b194-4dc8-a4a4-a3f5d2024f13", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"", Pod:"goldmane-9f7667bb8-4mdtz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicac5aebb567", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:48.452649 containerd[1982]: 2026-03-03 13:39:48.398 [INFO][5154] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.3/32] ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Namespace="calico-system" Pod="goldmane-9f7667bb8-4mdtz" WorkloadEndpoint="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" Mar 3 13:39:48.452747 containerd[1982]: 2026-03-03 13:39:48.398 [INFO][5154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicac5aebb567 ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Namespace="calico-system" Pod="goldmane-9f7667bb8-4mdtz" WorkloadEndpoint="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" Mar 3 13:39:48.452747 containerd[1982]: 2026-03-03 13:39:48.409 [INFO][5154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Namespace="calico-system" Pod="goldmane-9f7667bb8-4mdtz" WorkloadEndpoint="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" Mar 3 13:39:48.452803 containerd[1982]: 2026-03-03 13:39:48.412 [INFO][5154] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" 
Namespace="calico-system" Pod="goldmane-9f7667bb8-4mdtz" WorkloadEndpoint="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1782f57c-b194-4dc8-a4a4-a3f5d2024f13", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e", Pod:"goldmane-9f7667bb8-4mdtz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicac5aebb567", MAC:"16:6b:56:d1:56:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:48.452873 containerd[1982]: 2026-03-03 13:39:48.437 [INFO][5154] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" Namespace="calico-system" Pod="goldmane-9f7667bb8-4mdtz" WorkloadEndpoint="ip--172--31--31--254-k8s-goldmane--9f7667bb8--4mdtz-eth0" Mar 3 13:39:48.463305 
containerd[1982]: time="2026-03-03T13:39:48.462838066Z" level=info msg="connecting to shim cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e" address="unix:///run/containerd/s/2dd36ab380c158d3221ace826207a59d11379169f8ca59082a704031171a3e63" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:48.510287 systemd-networkd[1711]: cali09a5333137a: Link UP Mar 3 13:39:48.512590 systemd-networkd[1711]: cali09a5333137a: Gained carrier Mar 3 13:39:48.528978 containerd[1982]: time="2026-03-03T13:39:48.528806082Z" level=info msg="connecting to shim fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e" address="unix:///run/containerd/s/107ff1f3c8e8bfdbbec95470f475e3815766038f0a463d94ac356badb9bae523" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:48.560227 containerd[1982]: 2026-03-03 13:39:47.971 [INFO][5153] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0 csi-node-driver- calico-system c6298661-1013-4f89-bedc-335a967b2002 710 0 2026-03-03 13:39:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-254 csi-node-driver-rwlnv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali09a5333137a [] [] }} ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Namespace="calico-system" Pod="csi-node-driver-rwlnv" WorkloadEndpoint="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-" Mar 3 13:39:48.560227 containerd[1982]: 2026-03-03 13:39:47.972 [INFO][5153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Namespace="calico-system" 
Pod="csi-node-driver-rwlnv" WorkloadEndpoint="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" Mar 3 13:39:48.560227 containerd[1982]: 2026-03-03 13:39:48.208 [INFO][5190] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" HandleID="k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Workload="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.228 [INFO][5190] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" HandleID="k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Workload="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001237f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-254", "pod":"csi-node-driver-rwlnv", "timestamp":"2026-03-03 13:39:48.20878095 +0000 UTC"}, Hostname:"ip-172-31-31-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c8420)} Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.228 [INFO][5190] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.387 [INFO][5190] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.387 [INFO][5190] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-254' Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.430 [INFO][5190] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" host="ip-172-31-31-254" Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.440 [INFO][5190] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-254" Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.450 [INFO][5190] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.453 [INFO][5190] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.560519 containerd[1982]: 2026-03-03 13:39:48.456 [INFO][5190] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:48.560849 containerd[1982]: 2026-03-03 13:39:48.456 [INFO][5190] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" host="ip-172-31-31-254" Mar 3 13:39:48.560849 containerd[1982]: 2026-03-03 13:39:48.459 [INFO][5190] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778 Mar 3 13:39:48.560849 containerd[1982]: 2026-03-03 13:39:48.471 [INFO][5190] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" host="ip-172-31-31-254" Mar 3 13:39:48.560849 containerd[1982]: 2026-03-03 13:39:48.491 [INFO][5190] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.4/26] block=192.168.114.0/26 
handle="k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" host="ip-172-31-31-254" Mar 3 13:39:48.560849 containerd[1982]: 2026-03-03 13:39:48.491 [INFO][5190] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.4/26] handle="k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" host="ip-172-31-31-254" Mar 3 13:39:48.560849 containerd[1982]: 2026-03-03 13:39:48.491 [INFO][5190] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:39:48.560849 containerd[1982]: 2026-03-03 13:39:48.491 [INFO][5190] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.4/26] IPv6=[] ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" HandleID="k8s-pod-network.03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Workload="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" Mar 3 13:39:48.561079 containerd[1982]: 2026-03-03 13:39:48.499 [INFO][5153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Namespace="calico-system" Pod="csi-node-driver-rwlnv" WorkloadEndpoint="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6298661-1013-4f89-bedc-335a967b2002", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"", Pod:"csi-node-driver-rwlnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09a5333137a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:48.561159 containerd[1982]: 2026-03-03 13:39:48.500 [INFO][5153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.4/32] ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Namespace="calico-system" Pod="csi-node-driver-rwlnv" WorkloadEndpoint="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" Mar 3 13:39:48.561159 containerd[1982]: 2026-03-03 13:39:48.501 [INFO][5153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09a5333137a ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Namespace="calico-system" Pod="csi-node-driver-rwlnv" WorkloadEndpoint="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" Mar 3 13:39:48.561159 containerd[1982]: 2026-03-03 13:39:48.514 [INFO][5153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Namespace="calico-system" Pod="csi-node-driver-rwlnv" WorkloadEndpoint="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" Mar 3 13:39:48.561254 containerd[1982]: 2026-03-03 13:39:48.522 [INFO][5153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Namespace="calico-system" Pod="csi-node-driver-rwlnv" WorkloadEndpoint="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6298661-1013-4f89-bedc-335a967b2002", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778", Pod:"csi-node-driver-rwlnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09a5333137a", MAC:"6e:37:bb:b4:94:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:48.561952 containerd[1982]: 2026-03-03 13:39:48.547 [INFO][5153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" Namespace="calico-system" Pod="csi-node-driver-rwlnv" WorkloadEndpoint="ip--172--31--31--254-k8s-csi--node--driver--rwlnv-eth0" Mar 3 13:39:48.586082 systemd[1]: Started cri-containerd-cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e.scope - libcontainer container cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e. Mar 3 13:39:48.623943 containerd[1982]: time="2026-03-03T13:39:48.622810293Z" level=info msg="connecting to shim 03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778" address="unix:///run/containerd/s/129a9095b5d764b4362d943701705f533d470be403880972b02c513290a55327" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:48.635276 systemd[1]: Started cri-containerd-fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e.scope - libcontainer container fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e. Mar 3 13:39:48.659821 systemd[1]: Started cri-containerd-03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778.scope - libcontainer container 03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778. 
Mar 3 13:39:48.761876 containerd[1982]: time="2026-03-03T13:39:48.761184423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwlnv,Uid:c6298661-1013-4f89-bedc-335a967b2002,Namespace:calico-system,Attempt:0,} returns sandbox id \"03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778\"" Mar 3 13:39:48.808960 containerd[1982]: time="2026-03-03T13:39:48.806583837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 3 13:39:48.881907 containerd[1982]: time="2026-03-03T13:39:48.877773788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-4mdtz,Uid:1782f57c-b194-4dc8-a4a4-a3f5d2024f13,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e\"" Mar 3 13:39:48.905853 containerd[1982]: time="2026-03-03T13:39:48.905783429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8fd54b9-c6jwv,Uid:80c3033b-9292-4a4e-bd6c-4aaaed72fa18,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e\"" Mar 3 13:39:49.026354 sshd[5188]: Connection closed by 68.220.241.50 port 58056 Mar 3 13:39:49.027210 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Mar 3 13:39:49.034113 systemd-logind[1956]: Session 8 logged out. Waiting for processes to exit. Mar 3 13:39:49.035216 systemd[1]: sshd@7-172.31.31.254:22-68.220.241.50:58056.service: Deactivated successfully. Mar 3 13:39:49.037304 systemd[1]: session-8.scope: Deactivated successfully. Mar 3 13:39:49.038501 systemd-logind[1956]: Removed session 8. 
Mar 3 13:39:49.449113 systemd-networkd[1711]: calicac5aebb567: Gained IPv6LL Mar 3 13:39:49.641063 systemd-networkd[1711]: cali09a5333137a: Gained IPv6LL Mar 3 13:39:49.642301 systemd-networkd[1711]: cali27c46c42d3f: Gained IPv6LL Mar 3 13:39:49.651235 containerd[1982]: time="2026-03-03T13:39:49.651197440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d27tl,Uid:68c73d15-de59-4430-8497-c7d59f8425a7,Namespace:kube-system,Attempt:0,}" Mar 3 13:39:49.779382 systemd-networkd[1711]: cali5847042b5bb: Link UP Mar 3 13:39:49.779542 systemd-networkd[1711]: cali5847042b5bb: Gained carrier Mar 3 13:39:49.795117 containerd[1982]: 2026-03-03 13:39:49.702 [INFO][5421] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0 coredns-7d764666f9- kube-system 68c73d15-de59-4430-8497-c7d59f8425a7 856 0 2026-03-03 13:38:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-254 coredns-7d764666f9-d27tl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5847042b5bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Namespace="kube-system" Pod="coredns-7d764666f9-d27tl" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-" Mar 3 13:39:49.795117 containerd[1982]: 2026-03-03 13:39:49.702 [INFO][5421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Namespace="kube-system" Pod="coredns-7d764666f9-d27tl" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" Mar 3 13:39:49.795117 containerd[1982]: 2026-03-03 
13:39:49.731 [INFO][5432] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" HandleID="k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Workload="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.741 [INFO][5432] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" HandleID="k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Workload="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efe60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-254", "pod":"coredns-7d764666f9-d27tl", "timestamp":"2026-03-03 13:39:49.73110114 +0000 UTC"}, Hostname:"ip-172-31-31-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000426b00)} Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.741 [INFO][5432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.741 [INFO][5432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.741 [INFO][5432] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-254' Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.744 [INFO][5432] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" host="ip-172-31-31-254" Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.750 [INFO][5432] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-254" Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.755 [INFO][5432] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.757 [INFO][5432] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:49.795573 containerd[1982]: 2026-03-03 13:39:49.759 [INFO][5432] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:49.795804 containerd[1982]: 2026-03-03 13:39:49.759 [INFO][5432] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" host="ip-172-31-31-254" Mar 3 13:39:49.795804 containerd[1982]: 2026-03-03 13:39:49.761 [INFO][5432] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6 Mar 3 13:39:49.795804 containerd[1982]: 2026-03-03 13:39:49.765 [INFO][5432] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" host="ip-172-31-31-254" Mar 3 13:39:49.795804 containerd[1982]: 2026-03-03 13:39:49.774 [INFO][5432] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.5/26] block=192.168.114.0/26 
handle="k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" host="ip-172-31-31-254" Mar 3 13:39:49.795804 containerd[1982]: 2026-03-03 13:39:49.774 [INFO][5432] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.5/26] handle="k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" host="ip-172-31-31-254" Mar 3 13:39:49.795804 containerd[1982]: 2026-03-03 13:39:49.774 [INFO][5432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:39:49.795804 containerd[1982]: 2026-03-03 13:39:49.774 [INFO][5432] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.5/26] IPv6=[] ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" HandleID="k8s-pod-network.0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Workload="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" Mar 3 13:39:49.796085 containerd[1982]: 2026-03-03 13:39:49.776 [INFO][5421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Namespace="kube-system" Pod="coredns-7d764666f9-d27tl" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"68c73d15-de59-4430-8497-c7d59f8425a7", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"", Pod:"coredns-7d764666f9-d27tl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5847042b5bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:49.796085 containerd[1982]: 2026-03-03 13:39:49.776 [INFO][5421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.5/32] ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Namespace="kube-system" Pod="coredns-7d764666f9-d27tl" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" Mar 3 13:39:49.796085 containerd[1982]: 2026-03-03 13:39:49.776 [INFO][5421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5847042b5bb ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Namespace="kube-system" Pod="coredns-7d764666f9-d27tl" 
WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" Mar 3 13:39:49.796085 containerd[1982]: 2026-03-03 13:39:49.778 [INFO][5421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Namespace="kube-system" Pod="coredns-7d764666f9-d27tl" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" Mar 3 13:39:49.796085 containerd[1982]: 2026-03-03 13:39:49.779 [INFO][5421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Namespace="kube-system" Pod="coredns-7d764666f9-d27tl" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"68c73d15-de59-4430-8497-c7d59f8425a7", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6", Pod:"coredns-7d764666f9-d27tl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5847042b5bb", MAC:"06:94:76:a5:d3:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:49.796085 containerd[1982]: 2026-03-03 13:39:49.791 [INFO][5421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" Namespace="kube-system" Pod="coredns-7d764666f9-d27tl" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--d27tl-eth0" Mar 3 13:39:49.832917 containerd[1982]: time="2026-03-03T13:39:49.832557662Z" level=info msg="connecting to shim 0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6" address="unix:///run/containerd/s/b7ba1b6ec2fca31fef6636b7b1a6118336e3d05bc70c5c118c201c2cd365f31e" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:49.865105 systemd[1]: Started cri-containerd-0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6.scope - libcontainer container 0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6. 
Mar 3 13:39:49.943087 containerd[1982]: time="2026-03-03T13:39:49.942803224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d27tl,Uid:68c73d15-de59-4430-8497-c7d59f8425a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6\"" Mar 3 13:39:49.952622 containerd[1982]: time="2026-03-03T13:39:49.952434149Z" level=info msg="CreateContainer within sandbox \"0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 13:39:49.996312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709065759.mount: Deactivated successfully. Mar 3 13:39:49.997828 containerd[1982]: time="2026-03-03T13:39:49.996500530Z" level=info msg="Container cf95c79691feeb339478831389d39da3ae0747cdcb64054d4910ecec695970f3: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:50.023638 containerd[1982]: time="2026-03-03T13:39:50.023580318Z" level=info msg="CreateContainer within sandbox \"0517d20044a0a02e192acc056dc0c4a6f86a08dfef9e8f9f643d7f6dc654acb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf95c79691feeb339478831389d39da3ae0747cdcb64054d4910ecec695970f3\"" Mar 3 13:39:50.024358 containerd[1982]: time="2026-03-03T13:39:50.024141147Z" level=info msg="StartContainer for \"cf95c79691feeb339478831389d39da3ae0747cdcb64054d4910ecec695970f3\"" Mar 3 13:39:50.031825 containerd[1982]: time="2026-03-03T13:39:50.031724612Z" level=info msg="connecting to shim cf95c79691feeb339478831389d39da3ae0747cdcb64054d4910ecec695970f3" address="unix:///run/containerd/s/b7ba1b6ec2fca31fef6636b7b1a6118336e3d05bc70c5c118c201c2cd365f31e" protocol=ttrpc version=3 Mar 3 13:39:50.060090 systemd[1]: Started cri-containerd-cf95c79691feeb339478831389d39da3ae0747cdcb64054d4910ecec695970f3.scope - libcontainer container cf95c79691feeb339478831389d39da3ae0747cdcb64054d4910ecec695970f3. 
Mar 3 13:39:50.102222 containerd[1982]: time="2026-03-03T13:39:50.102177471Z" level=info msg="StartContainer for \"cf95c79691feeb339478831389d39da3ae0747cdcb64054d4910ecec695970f3\" returns successfully" Mar 3 13:39:50.656916 containerd[1982]: time="2026-03-03T13:39:50.656638276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hm7z4,Uid:4b3d84cb-804e-46c6-ae6a-9106d56643d6,Namespace:kube-system,Attempt:0,}" Mar 3 13:39:50.661905 containerd[1982]: time="2026-03-03T13:39:50.661579614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8fd54b9-szfgf,Uid:7b276f3a-0d5a-4ce2-8383-649a836c75b2,Namespace:calico-system,Attempt:0,}" Mar 3 13:39:50.996609 systemd-networkd[1711]: cali52d40d0e090: Link UP Mar 3 13:39:50.997366 systemd-networkd[1711]: cali52d40d0e090: Gained carrier Mar 3 13:39:51.018566 kubelet[3339]: I0303 13:39:51.018496 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-d27tl" podStartSLOduration=56.018472856 podStartE2EDuration="56.018472856s" podCreationTimestamp="2026-03-03 13:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:39:50.189923366 +0000 UTC m=+61.684132314" watchObservedRunningTime="2026-03-03 13:39:51.018472856 +0000 UTC m=+62.512681804" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.851 [INFO][5550] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0 calico-apiserver-5bf8fd54b9- calico-system 7b276f3a-0d5a-4ce2-8383-649a836c75b2 858 0 2026-03-03 13:39:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bf8fd54b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-254 calico-apiserver-5bf8fd54b9-szfgf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali52d40d0e090 [] [] }} ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-szfgf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.855 [INFO][5550] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-szfgf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.917 [INFO][5593] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" HandleID="k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Workload="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.928 [INFO][5593] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" HandleID="k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Workload="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036f730), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-254", "pod":"calico-apiserver-5bf8fd54b9-szfgf", "timestamp":"2026-03-03 13:39:50.917926164 +0000 UTC"}, Hostname:"ip-172-31-31-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000262f20)} Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.928 [INFO][5593] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.928 [INFO][5593] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.928 [INFO][5593] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-254' Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.932 [INFO][5593] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.939 [INFO][5593] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.946 [INFO][5593] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.949 [INFO][5593] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.953 [INFO][5593] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.953 [INFO][5593] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.957 [INFO][5593] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172 Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 
13:39:50.974 [INFO][5593] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.988 [INFO][5593] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.6/26] block=192.168.114.0/26 handle="k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.988 [INFO][5593] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.6/26] handle="k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" host="ip-172-31-31-254" Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.988 [INFO][5593] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:39:51.024578 containerd[1982]: 2026-03-03 13:39:50.988 [INFO][5593] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.6/26] IPv6=[] ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" HandleID="k8s-pod-network.d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Workload="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" Mar 3 13:39:51.026475 containerd[1982]: 2026-03-03 13:39:50.991 [INFO][5550] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-szfgf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0", GenerateName:"calico-apiserver-5bf8fd54b9-", Namespace:"calico-system", SelfLink:"", UID:"7b276f3a-0d5a-4ce2-8383-649a836c75b2", 
ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf8fd54b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"", Pod:"calico-apiserver-5bf8fd54b9-szfgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali52d40d0e090", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:51.026475 containerd[1982]: 2026-03-03 13:39:50.992 [INFO][5550] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.6/32] ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-szfgf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" Mar 3 13:39:51.026475 containerd[1982]: 2026-03-03 13:39:50.992 [INFO][5550] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52d40d0e090 ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-szfgf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" Mar 3 13:39:51.026475 containerd[1982]: 2026-03-03 13:39:50.998 [INFO][5550] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-szfgf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" Mar 3 13:39:51.026475 containerd[1982]: 2026-03-03 13:39:50.999 [INFO][5550] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-szfgf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0", GenerateName:"calico-apiserver-5bf8fd54b9-", Namespace:"calico-system", SelfLink:"", UID:"7b276f3a-0d5a-4ce2-8383-649a836c75b2", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf8fd54b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172", Pod:"calico-apiserver-5bf8fd54b9-szfgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali52d40d0e090", MAC:"42:e5:cc:48:28:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:51.026475 containerd[1982]: 2026-03-03 13:39:51.017 [INFO][5550] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" Namespace="calico-system" Pod="calico-apiserver-5bf8fd54b9-szfgf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--apiserver--5bf8fd54b9--szfgf-eth0" Mar 3 13:39:51.093006 containerd[1982]: time="2026-03-03T13:39:51.092871829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 3 13:39:51.115995 containerd[1982]: time="2026-03-03T13:39:51.115937386Z" level=info msg="connecting to shim d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172" address="unix:///run/containerd/s/8d3cce50a226ed6a13e59f49888fdc61855ec9350bd086b748750936e83b8a8e" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:51.146108 containerd[1982]: time="2026-03-03T13:39:51.146053198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.339329546s" Mar 3 13:39:51.146108 containerd[1982]: time="2026-03-03T13:39:51.146110051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 3 13:39:51.150971 containerd[1982]: time="2026-03-03T13:39:51.149850600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 3 13:39:51.156058 
containerd[1982]: time="2026-03-03T13:39:51.155663250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:51.156921 containerd[1982]: time="2026-03-03T13:39:51.156862351Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:51.158255 containerd[1982]: time="2026-03-03T13:39:51.157639007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:51.166876 containerd[1982]: time="2026-03-03T13:39:51.166817060Z" level=info msg="CreateContainer within sandbox \"03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 3 13:39:51.172996 systemd-networkd[1711]: cali34633b1cf83: Link UP Mar 3 13:39:51.173281 systemd-networkd[1711]: cali34633b1cf83: Gained carrier Mar 3 13:39:51.178093 systemd[1]: Started cri-containerd-d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172.scope - libcontainer container d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172. 
Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:50.797 [INFO][5549] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0 coredns-7d764666f9- kube-system 4b3d84cb-804e-46c6-ae6a-9106d56643d6 857 0 2026-03-03 13:38:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-254 coredns-7d764666f9-hm7z4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34633b1cf83 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Namespace="kube-system" Pod="coredns-7d764666f9-hm7z4" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:50.798 [INFO][5549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Namespace="kube-system" Pod="coredns-7d764666f9-hm7z4" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:50.956 [INFO][5582] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" HandleID="k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Workload="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:50.972 [INFO][5582] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" 
HandleID="k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Workload="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040e210), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-254", "pod":"coredns-7d764666f9-hm7z4", "timestamp":"2026-03-03 13:39:50.956986352 +0000 UTC"}, Hostname:"ip-172-31-31-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000292000)} Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:50.972 [INFO][5582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:50.988 [INFO][5582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:50.988 [INFO][5582] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-254' Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.035 [INFO][5582] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.050 [INFO][5582] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.059 [INFO][5582] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.062 [INFO][5582] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.069 [INFO][5582] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.072 [INFO][5582] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.087 [INFO][5582] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.113 [INFO][5582] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.140 [INFO][5582] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.7/26] block=192.168.114.0/26 handle="k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.142 [INFO][5582] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.7/26] handle="k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" host="ip-172-31-31-254" Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.143 [INFO][5582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 13:39:51.229209 containerd[1982]: 2026-03-03 13:39:51.143 [INFO][5582] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.7/26] IPv6=[] ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" HandleID="k8s-pod-network.85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Workload="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" Mar 3 13:39:51.233078 containerd[1982]: 2026-03-03 13:39:51.158 [INFO][5549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Namespace="kube-system" Pod="coredns-7d764666f9-hm7z4" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4b3d84cb-804e-46c6-ae6a-9106d56643d6", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"", Pod:"coredns-7d764666f9-hm7z4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34633b1cf83", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:51.233078 containerd[1982]: 2026-03-03 13:39:51.158 [INFO][5549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.7/32] ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Namespace="kube-system" Pod="coredns-7d764666f9-hm7z4" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" Mar 3 13:39:51.233078 containerd[1982]: 2026-03-03 13:39:51.158 [INFO][5549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34633b1cf83 ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Namespace="kube-system" Pod="coredns-7d764666f9-hm7z4" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" Mar 3 13:39:51.233078 containerd[1982]: 2026-03-03 13:39:51.175 [INFO][5549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Namespace="kube-system" Pod="coredns-7d764666f9-hm7z4" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" Mar 3 13:39:51.233078 containerd[1982]: 2026-03-03 13:39:51.183 [INFO][5549] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Namespace="kube-system" Pod="coredns-7d764666f9-hm7z4" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4b3d84cb-804e-46c6-ae6a-9106d56643d6", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f", Pod:"coredns-7d764666f9-hm7z4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34633b1cf83", MAC:"16:6c:dc:08:fd:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:51.233078 containerd[1982]: 2026-03-03 13:39:51.220 [INFO][5549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" Namespace="kube-system" Pod="coredns-7d764666f9-hm7z4" WorkloadEndpoint="ip--172--31--31--254-k8s-coredns--7d764666f9--hm7z4-eth0" Mar 3 13:39:51.236616 containerd[1982]: time="2026-03-03T13:39:51.235035275Z" level=info msg="Container b934a99d09ebc61bb14ae57d50181f25bdaecb6f9ddb03d57841b3caee276f8d: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:51.248775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198886413.mount: Deactivated successfully. 
Mar 3 13:39:51.326832 containerd[1982]: time="2026-03-03T13:39:51.326678758Z" level=info msg="CreateContainer within sandbox \"03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b934a99d09ebc61bb14ae57d50181f25bdaecb6f9ddb03d57841b3caee276f8d\"" Mar 3 13:39:51.328937 containerd[1982]: time="2026-03-03T13:39:51.327987890Z" level=info msg="StartContainer for \"b934a99d09ebc61bb14ae57d50181f25bdaecb6f9ddb03d57841b3caee276f8d\"" Mar 3 13:39:51.332057 containerd[1982]: time="2026-03-03T13:39:51.331660201Z" level=info msg="connecting to shim b934a99d09ebc61bb14ae57d50181f25bdaecb6f9ddb03d57841b3caee276f8d" address="unix:///run/containerd/s/129a9095b5d764b4362d943701705f533d470be403880972b02c513290a55327" protocol=ttrpc version=3 Mar 3 13:39:51.369114 systemd[1]: Started cri-containerd-b934a99d09ebc61bb14ae57d50181f25bdaecb6f9ddb03d57841b3caee276f8d.scope - libcontainer container b934a99d09ebc61bb14ae57d50181f25bdaecb6f9ddb03d57841b3caee276f8d. Mar 3 13:39:51.387941 containerd[1982]: time="2026-03-03T13:39:51.387893247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8fd54b9-szfgf,Uid:7b276f3a-0d5a-4ce2-8383-649a836c75b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172\"" Mar 3 13:39:51.402421 containerd[1982]: time="2026-03-03T13:39:51.401382659Z" level=info msg="connecting to shim 85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f" address="unix:///run/containerd/s/b5e48a0f92acb7cde71fba63db1c03789988ca6facbb242c0cd0513fabb9f754" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:51.433229 systemd-networkd[1711]: cali5847042b5bb: Gained IPv6LL Mar 3 13:39:51.472111 systemd[1]: Started cri-containerd-85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f.scope - libcontainer container 85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f. 
Mar 3 13:39:51.504614 containerd[1982]: time="2026-03-03T13:39:51.504490768Z" level=info msg="StartContainer for \"b934a99d09ebc61bb14ae57d50181f25bdaecb6f9ddb03d57841b3caee276f8d\" returns successfully" Mar 3 13:39:51.566229 containerd[1982]: time="2026-03-03T13:39:51.566188418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hm7z4,Uid:4b3d84cb-804e-46c6-ae6a-9106d56643d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f\"" Mar 3 13:39:51.575371 containerd[1982]: time="2026-03-03T13:39:51.575314396Z" level=info msg="CreateContainer within sandbox \"85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 13:39:51.593745 containerd[1982]: time="2026-03-03T13:39:51.593701696Z" level=info msg="Container 160d173fe664011f5bd627b555fbf797116acf31522b89f476a8267e57706598: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:51.606034 containerd[1982]: time="2026-03-03T13:39:51.605977012Z" level=info msg="CreateContainer within sandbox \"85c9fa9f6fdab654e9ce6fc01f7b9fda6dce6494396724bd7b2f19d5d845981f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"160d173fe664011f5bd627b555fbf797116acf31522b89f476a8267e57706598\"" Mar 3 13:39:51.606852 containerd[1982]: time="2026-03-03T13:39:51.606827154Z" level=info msg="StartContainer for \"160d173fe664011f5bd627b555fbf797116acf31522b89f476a8267e57706598\"" Mar 3 13:39:51.607999 containerd[1982]: time="2026-03-03T13:39:51.607939360Z" level=info msg="connecting to shim 160d173fe664011f5bd627b555fbf797116acf31522b89f476a8267e57706598" address="unix:///run/containerd/s/b5e48a0f92acb7cde71fba63db1c03789988ca6facbb242c0cd0513fabb9f754" protocol=ttrpc version=3 Mar 3 13:39:51.626092 systemd[1]: Started cri-containerd-160d173fe664011f5bd627b555fbf797116acf31522b89f476a8267e57706598.scope - libcontainer container 
160d173fe664011f5bd627b555fbf797116acf31522b89f476a8267e57706598. Mar 3 13:39:51.652555 containerd[1982]: time="2026-03-03T13:39:51.652505688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9c78b65-szhdf,Uid:55b7c3f2-95aa-4753-96ac-cfd56faee332,Namespace:calico-system,Attempt:0,}" Mar 3 13:39:51.670402 containerd[1982]: time="2026-03-03T13:39:51.670367177Z" level=info msg="StartContainer for \"160d173fe664011f5bd627b555fbf797116acf31522b89f476a8267e57706598\" returns successfully" Mar 3 13:39:51.836995 systemd-networkd[1711]: calia9c5d1efb74: Link UP Mar 3 13:39:51.837372 systemd-networkd[1711]: calia9c5d1efb74: Gained carrier Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.720 [INFO][5791] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0 calico-kube-controllers-6dc9c78b65- calico-system 55b7c3f2-95aa-4753-96ac-cfd56faee332 849 0 2026-03-03 13:39:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dc9c78b65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-254 calico-kube-controllers-6dc9c78b65-szhdf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia9c5d1efb74 [] [] }} ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Namespace="calico-system" Pod="calico-kube-controllers-6dc9c78b65-szhdf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.720 [INFO][5791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Namespace="calico-system" 
Pod="calico-kube-controllers-6dc9c78b65-szhdf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.761 [INFO][5809] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" HandleID="k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Workload="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.769 [INFO][5809] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" HandleID="k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Workload="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-254", "pod":"calico-kube-controllers-6dc9c78b65-szhdf", "timestamp":"2026-03-03 13:39:51.761488908 +0000 UTC"}, Hostname:"ip-172-31-31-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fd080)} Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.769 [INFO][5809] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.769 [INFO][5809] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.769 [INFO][5809] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-254' Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.772 [INFO][5809] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.782 [INFO][5809] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.789 [INFO][5809] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.792 [INFO][5809] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.795 [INFO][5809] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.795 [INFO][5809] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.797 [INFO][5809] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9 Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.813 [INFO][5809] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.825 [INFO][5809] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.8/26] block=192.168.114.0/26 
handle="k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.825 [INFO][5809] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.8/26] handle="k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" host="ip-172-31-31-254" Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.825 [INFO][5809] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:39:51.863441 containerd[1982]: 2026-03-03 13:39:51.825 [INFO][5809] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.8/26] IPv6=[] ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" HandleID="k8s-pod-network.026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Workload="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" Mar 3 13:39:51.865414 containerd[1982]: 2026-03-03 13:39:51.828 [INFO][5791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Namespace="calico-system" Pod="calico-kube-controllers-6dc9c78b65-szhdf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0", GenerateName:"calico-kube-controllers-6dc9c78b65-", Namespace:"calico-system", SelfLink:"", UID:"55b7c3f2-95aa-4753-96ac-cfd56faee332", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc9c78b65", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"", Pod:"calico-kube-controllers-6dc9c78b65-szhdf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9c5d1efb74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:51.865414 containerd[1982]: 2026-03-03 13:39:51.829 [INFO][5791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.8/32] ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Namespace="calico-system" Pod="calico-kube-controllers-6dc9c78b65-szhdf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" Mar 3 13:39:51.865414 containerd[1982]: 2026-03-03 13:39:51.829 [INFO][5791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9c5d1efb74 ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Namespace="calico-system" Pod="calico-kube-controllers-6dc9c78b65-szhdf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" Mar 3 13:39:51.865414 containerd[1982]: 2026-03-03 13:39:51.834 [INFO][5791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Namespace="calico-system" Pod="calico-kube-controllers-6dc9c78b65-szhdf" 
WorkloadEndpoint="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" Mar 3 13:39:51.865414 containerd[1982]: 2026-03-03 13:39:51.836 [INFO][5791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Namespace="calico-system" Pod="calico-kube-controllers-6dc9c78b65-szhdf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0", GenerateName:"calico-kube-controllers-6dc9c78b65-", Namespace:"calico-system", SelfLink:"", UID:"55b7c3f2-95aa-4753-96ac-cfd56faee332", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc9c78b65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-254", ContainerID:"026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9", Pod:"calico-kube-controllers-6dc9c78b65-szhdf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9c5d1efb74", MAC:"5a:65:b0:e5:26:62", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:39:51.865414 containerd[1982]: 2026-03-03 13:39:51.855 [INFO][5791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" Namespace="calico-system" Pod="calico-kube-controllers-6dc9c78b65-szhdf" WorkloadEndpoint="ip--172--31--31--254-k8s-calico--kube--controllers--6dc9c78b65--szhdf-eth0" Mar 3 13:39:51.949522 containerd[1982]: time="2026-03-03T13:39:51.949415652Z" level=info msg="connecting to shim 026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9" address="unix:///run/containerd/s/59d86cae542661282cfbcbb7cadc344d5470c9b6d9ecbe40d868354abb69106a" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:39:51.993113 systemd[1]: Started cri-containerd-026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9.scope - libcontainer container 026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9. 
Mar 3 13:39:52.066440 containerd[1982]: time="2026-03-03T13:39:52.066393228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9c78b65-szhdf,Uid:55b7c3f2-95aa-4753-96ac-cfd56faee332,Namespace:calico-system,Attempt:0,} returns sandbox id \"026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9\"" Mar 3 13:39:52.075593 systemd-networkd[1711]: cali52d40d0e090: Gained IPv6LL Mar 3 13:39:52.213201 kubelet[3339]: I0303 13:39:52.212998 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-hm7z4" podStartSLOduration=57.212979415 podStartE2EDuration="57.212979415s" podCreationTimestamp="2026-03-03 13:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:39:52.210569039 +0000 UTC m=+63.704777988" watchObservedRunningTime="2026-03-03 13:39:52.212979415 +0000 UTC m=+63.707188363" Mar 3 13:39:52.969020 systemd-networkd[1711]: cali34633b1cf83: Gained IPv6LL Mar 3 13:39:53.673122 systemd-networkd[1711]: calia9c5d1efb74: Gained IPv6LL Mar 3 13:39:54.111518 systemd[1]: Started sshd@8-172.31.31.254:22-68.220.241.50:32798.service - OpenSSH per-connection server daemon (68.220.241.50:32798). Mar 3 13:39:54.603594 sshd[5895]: Accepted publickey for core from 68.220.241.50 port 32798 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:39:54.607120 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:39:54.613120 systemd-logind[1956]: New session 9 of user core. Mar 3 13:39:54.618096 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 3 13:39:55.264599 sshd[5898]: Connection closed by 68.220.241.50 port 32798 Mar 3 13:39:55.265652 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Mar 3 13:39:55.273826 systemd[1]: sshd@8-172.31.31.254:22-68.220.241.50:32798.service: Deactivated successfully. Mar 3 13:39:55.278306 systemd[1]: session-9.scope: Deactivated successfully. Mar 3 13:39:55.280965 systemd-logind[1956]: Session 9 logged out. Waiting for processes to exit. Mar 3 13:39:55.284367 systemd-logind[1956]: Removed session 9. Mar 3 13:39:55.802128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886027344.mount: Deactivated successfully. Mar 3 13:39:55.974943 ntpd[2230]: Listen normally on 9 cali27c46c42d3f [fe80::ecee:eeff:feee:eeee%8]:123 Mar 3 13:39:55.975011 ntpd[2230]: Listen normally on 10 calicac5aebb567 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 3 13:39:55.975038 ntpd[2230]: Listen normally on 11 cali09a5333137a [fe80::ecee:eeff:feee:eeee%10]:123 Mar 3 13:39:55.975066 ntpd[2230]: Listen normally on 12 cali5847042b5bb
[fe80::ecee:eeff:feee:eeee%11]:123 Mar 3 13:39:55.975091 ntpd[2230]: Listen normally on 13 cali52d40d0e090 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 3 13:39:55.975118 ntpd[2230]: Listen normally on 14 cali34633b1cf83 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 3 13:39:55.975146 ntpd[2230]: Listen normally on 15 calia9c5d1efb74 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 3 13:39:56.408712 containerd[1982]: time="2026-03-03T13:39:56.408660659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:56.410173 containerd[1982]: time="2026-03-03T13:39:56.410136937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 3 13:39:56.427195 containerd[1982]: time="2026-03-03T13:39:56.427136168Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:56.429915 containerd[1982]: time="2026-03-03T13:39:56.429637368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:39:56.430559 containerd[1982]: time="2026-03-03T13:39:56.430511779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.27857798s" Mar 3 13:39:56.430559 containerd[1982]: time="2026-03-03T13:39:56.430545513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference 
\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 3 13:39:56.466943 containerd[1982]: time="2026-03-03T13:39:56.466661296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 13:39:56.491185 containerd[1982]: time="2026-03-03T13:39:56.491152230Z" level=info msg="CreateContainer within sandbox \"fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 3 13:39:56.501376 containerd[1982]: time="2026-03-03T13:39:56.501332669Z" level=info msg="Container ca3ed3e724ca975dff5434cbde569f28f02cc3e6b00daec29a683aaf92d908d7: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:39:56.613653 containerd[1982]: time="2026-03-03T13:39:56.613611244Z" level=info msg="CreateContainer within sandbox \"fb3d18c09e94023ae649e1aede0c12cd46536a1ebe129c6a4869f23fba67822e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ca3ed3e724ca975dff5434cbde569f28f02cc3e6b00daec29a683aaf92d908d7\"" Mar 3 13:39:56.631293 containerd[1982]: time="2026-03-03T13:39:56.631243974Z" level=info msg="StartContainer for \"ca3ed3e724ca975dff5434cbde569f28f02cc3e6b00daec29a683aaf92d908d7\"" Mar 3 13:39:56.632574 containerd[1982]: time="2026-03-03T13:39:56.632533176Z" level=info msg="connecting to shim ca3ed3e724ca975dff5434cbde569f28f02cc3e6b00daec29a683aaf92d908d7" address="unix:///run/containerd/s/107ff1f3c8e8bfdbbec95470f475e3815766038f0a463d94ac356badb9bae523" protocol=ttrpc version=3 Mar 3 13:39:56.677681 systemd[1]: Started cri-containerd-ca3ed3e724ca975dff5434cbde569f28f02cc3e6b00daec29a683aaf92d908d7.scope - libcontainer container ca3ed3e724ca975dff5434cbde569f28f02cc3e6b00daec29a683aaf92d908d7. 
Mar 3 13:39:56.745154 containerd[1982]: time="2026-03-03T13:39:56.745049447Z" level=info msg="StartContainer for \"ca3ed3e724ca975dff5434cbde569f28f02cc3e6b00daec29a683aaf92d908d7\" returns successfully" Mar 3 13:40:00.353656 systemd[1]: Started sshd@9-172.31.31.254:22-68.220.241.50:32808.service - OpenSSH per-connection server daemon (68.220.241.50:32808). Mar 3 13:40:00.910461 sshd[6038]: Accepted publickey for core from 68.220.241.50 port 32808 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:00.914305 sshd-session[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:00.926994 systemd-logind[1956]: New session 10 of user core. Mar 3 13:40:00.930244 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 3 13:40:02.783342 sshd[6057]: Connection closed by 68.220.241.50 port 32808 Mar 3 13:40:02.784395 sshd-session[6038]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:02.793688 systemd[1]: sshd@9-172.31.31.254:22-68.220.241.50:32808.service: Deactivated successfully. Mar 3 13:40:02.799749 systemd[1]: session-10.scope: Deactivated successfully. Mar 3 13:40:02.802062 systemd-logind[1956]: Session 10 logged out. Waiting for processes to exit. Mar 3 13:40:02.805487 systemd-logind[1956]: Removed session 10. 
Mar 3 13:40:03.362652 containerd[1982]: time="2026-03-03T13:40:03.362601949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:03.366251 containerd[1982]: time="2026-03-03T13:40:03.365945350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 3 13:40:03.366251 containerd[1982]: time="2026-03-03T13:40:03.366190490Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:03.369082 containerd[1982]: time="2026-03-03T13:40:03.369049745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:03.369909 containerd[1982]: time="2026-03-03T13:40:03.369849126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 6.903148641s" Mar 3 13:40:03.369909 containerd[1982]: time="2026-03-03T13:40:03.369876075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 3 13:40:03.373711 containerd[1982]: time="2026-03-03T13:40:03.373011933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 13:40:03.444239 containerd[1982]: time="2026-03-03T13:40:03.444161942Z" level=info msg="CreateContainer within sandbox 
\"cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 3 13:40:03.452664 containerd[1982]: time="2026-03-03T13:40:03.451924117Z" level=info msg="Container ef6ced27ff8db164ba993322fe91f289e82c8dbb584b6314c93446e38753ef63: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:40:03.460397 containerd[1982]: time="2026-03-03T13:40:03.460349384Z" level=info msg="CreateContainer within sandbox \"cb1bec023a3e7010c6f5144d5a501a21003262e53ab2fad3c94637307009a94e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef6ced27ff8db164ba993322fe91f289e82c8dbb584b6314c93446e38753ef63\"" Mar 3 13:40:03.461930 containerd[1982]: time="2026-03-03T13:40:03.461061204Z" level=info msg="StartContainer for \"ef6ced27ff8db164ba993322fe91f289e82c8dbb584b6314c93446e38753ef63\"" Mar 3 13:40:03.462329 containerd[1982]: time="2026-03-03T13:40:03.462305648Z" level=info msg="connecting to shim ef6ced27ff8db164ba993322fe91f289e82c8dbb584b6314c93446e38753ef63" address="unix:///run/containerd/s/2dd36ab380c158d3221ace826207a59d11379169f8ca59082a704031171a3e63" protocol=ttrpc version=3 Mar 3 13:40:03.557719 systemd[1]: Started cri-containerd-ef6ced27ff8db164ba993322fe91f289e82c8dbb584b6314c93446e38753ef63.scope - libcontainer container ef6ced27ff8db164ba993322fe91f289e82c8dbb584b6314c93446e38753ef63. 
Mar 3 13:40:03.635473 containerd[1982]: time="2026-03-03T13:40:03.635359793Z" level=info msg="StartContainer for \"ef6ced27ff8db164ba993322fe91f289e82c8dbb584b6314c93446e38753ef63\" returns successfully" Mar 3 13:40:03.754475 containerd[1982]: time="2026-03-03T13:40:03.753290844Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:03.755895 containerd[1982]: time="2026-03-03T13:40:03.755840884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 3 13:40:03.761997 containerd[1982]: time="2026-03-03T13:40:03.761926482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 388.828795ms" Mar 3 13:40:03.761997 containerd[1982]: time="2026-03-03T13:40:03.761999318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 3 13:40:03.764644 containerd[1982]: time="2026-03-03T13:40:03.764603585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 3 13:40:03.776062 containerd[1982]: time="2026-03-03T13:40:03.776019489Z" level=info msg="CreateContainer within sandbox \"d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 3 13:40:03.796905 containerd[1982]: time="2026-03-03T13:40:03.794259348Z" level=info msg="Container c8a78d4f405049d1a3f1be3ff078bcced6528f6817ca41adb3440007f7ebcded: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:40:03.805730 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4029557254.mount: Deactivated successfully. Mar 3 13:40:03.818953 containerd[1982]: time="2026-03-03T13:40:03.818910249Z" level=info msg="CreateContainer within sandbox \"d60f35c2f28df2197421a06d237993563ffdb5b86e05f5f3d767f39dacadc172\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c8a78d4f405049d1a3f1be3ff078bcced6528f6817ca41adb3440007f7ebcded\"" Mar 3 13:40:03.821024 containerd[1982]: time="2026-03-03T13:40:03.820870846Z" level=info msg="StartContainer for \"c8a78d4f405049d1a3f1be3ff078bcced6528f6817ca41adb3440007f7ebcded\"" Mar 3 13:40:03.824170 containerd[1982]: time="2026-03-03T13:40:03.824114411Z" level=info msg="connecting to shim c8a78d4f405049d1a3f1be3ff078bcced6528f6817ca41adb3440007f7ebcded" address="unix:///run/containerd/s/8d3cce50a226ed6a13e59f49888fdc61855ec9350bd086b748750936e83b8a8e" protocol=ttrpc version=3 Mar 3 13:40:03.855099 systemd[1]: Started cri-containerd-c8a78d4f405049d1a3f1be3ff078bcced6528f6817ca41adb3440007f7ebcded.scope - libcontainer container c8a78d4f405049d1a3f1be3ff078bcced6528f6817ca41adb3440007f7ebcded. 
Mar 3 13:40:03.932357 containerd[1982]: time="2026-03-03T13:40:03.932316251Z" level=info msg="StartContainer for \"c8a78d4f405049d1a3f1be3ff078bcced6528f6817ca41adb3440007f7ebcded\" returns successfully" Mar 3 13:40:04.297424 kubelet[3339]: I0303 13:40:04.250345 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-4mdtz" podStartSLOduration=45.686063435 podStartE2EDuration="53.242641131s" podCreationTimestamp="2026-03-03 13:39:11 +0000 UTC" firstStartedPulling="2026-03-03 13:39:48.892320548 +0000 UTC m=+60.386529477" lastFinishedPulling="2026-03-03 13:39:56.448898234 +0000 UTC m=+67.943107173" observedRunningTime="2026-03-03 13:39:57.231335671 +0000 UTC m=+68.725544620" watchObservedRunningTime="2026-03-03 13:40:04.242641131 +0000 UTC m=+75.736850086" Mar 3 13:40:04.301979 kubelet[3339]: I0303 13:40:04.299504 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5bf8fd54b9-szfgf" podStartSLOduration=40.927322125 podStartE2EDuration="53.299482648s" podCreationTimestamp="2026-03-03 13:39:11 +0000 UTC" firstStartedPulling="2026-03-03 13:39:51.392180561 +0000 UTC m=+62.886389494" lastFinishedPulling="2026-03-03 13:40:03.764341089 +0000 UTC m=+75.258550017" observedRunningTime="2026-03-03 13:40:04.217026002 +0000 UTC m=+75.711234950" watchObservedRunningTime="2026-03-03 13:40:04.299482648 +0000 UTC m=+75.793691593" Mar 3 13:40:04.301979 kubelet[3339]: I0303 13:40:04.300406 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5bf8fd54b9-c6jwv" podStartSLOduration=38.838484251 podStartE2EDuration="53.3003931s" podCreationTimestamp="2026-03-03 13:39:11 +0000 UTC" firstStartedPulling="2026-03-03 13:39:48.910754607 +0000 UTC m=+60.404963533" lastFinishedPulling="2026-03-03 13:40:03.372663443 +0000 UTC m=+74.866872382" observedRunningTime="2026-03-03 13:40:04.298914939 +0000 UTC m=+75.793123883" 
watchObservedRunningTime="2026-03-03 13:40:04.3003931 +0000 UTC m=+75.794602051" Mar 3 13:40:05.912536 containerd[1982]: time="2026-03-03T13:40:05.912485736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:05.914602 containerd[1982]: time="2026-03-03T13:40:05.914561044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 3 13:40:05.916093 containerd[1982]: time="2026-03-03T13:40:05.916056895Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:05.921814 containerd[1982]: time="2026-03-03T13:40:05.921768768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:05.924356 containerd[1982]: time="2026-03-03T13:40:05.924311944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.15965283s" Mar 3 13:40:05.924481 containerd[1982]: time="2026-03-03T13:40:05.924359874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 3 13:40:05.928907 containerd[1982]: time="2026-03-03T13:40:05.928843355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 3 
13:40:05.936332 containerd[1982]: time="2026-03-03T13:40:05.936291687Z" level=info msg="CreateContainer within sandbox \"03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 3 13:40:05.950986 containerd[1982]: time="2026-03-03T13:40:05.950943725Z" level=info msg="Container 9945e178dba331de23b7d6457e386bf82145bec523638038ec5cecb6e5f5f8f2: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:40:05.964181 containerd[1982]: time="2026-03-03T13:40:05.964134231Z" level=info msg="CreateContainer within sandbox \"03c7f0e5dfa884de885d64e5c535536abfcc226a3fb1bb28bd6722f636f80778\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9945e178dba331de23b7d6457e386bf82145bec523638038ec5cecb6e5f5f8f2\"" Mar 3 13:40:05.965731 containerd[1982]: time="2026-03-03T13:40:05.965684606Z" level=info msg="StartContainer for \"9945e178dba331de23b7d6457e386bf82145bec523638038ec5cecb6e5f5f8f2\"" Mar 3 13:40:05.968380 containerd[1982]: time="2026-03-03T13:40:05.968330869Z" level=info msg="connecting to shim 9945e178dba331de23b7d6457e386bf82145bec523638038ec5cecb6e5f5f8f2" address="unix:///run/containerd/s/129a9095b5d764b4362d943701705f533d470be403880972b02c513290a55327" protocol=ttrpc version=3 Mar 3 13:40:06.004111 systemd[1]: Started cri-containerd-9945e178dba331de23b7d6457e386bf82145bec523638038ec5cecb6e5f5f8f2.scope - libcontainer container 9945e178dba331de23b7d6457e386bf82145bec523638038ec5cecb6e5f5f8f2. 
Mar 3 13:40:06.091167 containerd[1982]: time="2026-03-03T13:40:06.091099189Z" level=info msg="StartContainer for \"9945e178dba331de23b7d6457e386bf82145bec523638038ec5cecb6e5f5f8f2\" returns successfully" Mar 3 13:40:07.034571 kubelet[3339]: I0303 13:40:07.026570 3339 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 3 13:40:07.044352 kubelet[3339]: I0303 13:40:07.043847 3339 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 3 13:40:07.886418 systemd[1]: Started sshd@10-172.31.31.254:22-68.220.241.50:37212.service - OpenSSH per-connection server daemon (68.220.241.50:37212). Mar 3 13:40:08.563006 sshd[6194]: Accepted publickey for core from 68.220.241.50 port 37212 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:08.566515 sshd-session[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:08.577837 systemd-logind[1956]: New session 11 of user core. Mar 3 13:40:08.581389 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 3 13:40:08.905608 kubelet[3339]: I0303 13:40:08.905099 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-rwlnv" podStartSLOduration=39.754272889 podStartE2EDuration="56.887514326s" podCreationTimestamp="2026-03-03 13:39:12 +0000 UTC" firstStartedPulling="2026-03-03 13:39:48.794260632 +0000 UTC m=+60.288469558" lastFinishedPulling="2026-03-03 13:40:05.927502046 +0000 UTC m=+77.421710995" observedRunningTime="2026-03-03 13:40:06.285257667 +0000 UTC m=+77.779466618" watchObservedRunningTime="2026-03-03 13:40:08.887514326 +0000 UTC m=+80.381723274" Mar 3 13:40:09.566624 sshd[6228]: Connection closed by 68.220.241.50 port 37212 Mar 3 13:40:09.568727 sshd-session[6194]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:09.573065 systemd-logind[1956]: Session 11 logged out. Waiting for processes to exit. Mar 3 13:40:09.573856 systemd[1]: sshd@10-172.31.31.254:22-68.220.241.50:37212.service: Deactivated successfully. Mar 3 13:40:09.576798 systemd[1]: session-11.scope: Deactivated successfully. Mar 3 13:40:09.579040 systemd-logind[1956]: Removed session 11. Mar 3 13:40:09.655614 systemd[1]: Started sshd@11-172.31.31.254:22-68.220.241.50:37214.service - OpenSSH per-connection server daemon (68.220.241.50:37214). Mar 3 13:40:10.124298 sshd[6258]: Accepted publickey for core from 68.220.241.50 port 37214 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:10.126051 sshd-session[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:10.132869 systemd-logind[1956]: New session 12 of user core. Mar 3 13:40:10.138036 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 3 13:40:10.612995 sshd[6261]: Connection closed by 68.220.241.50 port 37214 Mar 3 13:40:10.615221 sshd-session[6258]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:10.621508 systemd-logind[1956]: Session 12 logged out. 
Waiting for processes to exit. Mar 3 13:40:10.622225 systemd[1]: sshd@11-172.31.31.254:22-68.220.241.50:37214.service: Deactivated successfully. Mar 3 13:40:10.626869 systemd[1]: session-12.scope: Deactivated successfully. Mar 3 13:40:10.633591 systemd-logind[1956]: Removed session 12. Mar 3 13:40:10.715218 systemd[1]: Started sshd@12-172.31.31.254:22-68.220.241.50:37220.service - OpenSSH per-connection server daemon (68.220.241.50:37220). Mar 3 13:40:11.206919 sshd[6272]: Accepted publickey for core from 68.220.241.50 port 37220 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:11.210548 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:11.219420 systemd-logind[1956]: New session 13 of user core. Mar 3 13:40:11.226169 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 3 13:40:11.669784 sshd[6279]: Connection closed by 68.220.241.50 port 37220 Mar 3 13:40:11.673144 sshd-session[6272]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:11.680784 systemd-logind[1956]: Session 13 logged out. Waiting for processes to exit. Mar 3 13:40:11.681593 systemd[1]: sshd@12-172.31.31.254:22-68.220.241.50:37220.service: Deactivated successfully. Mar 3 13:40:11.685670 systemd[1]: session-13.scope: Deactivated successfully. Mar 3 13:40:11.689724 systemd-logind[1956]: Removed session 13. 
Mar 3 13:40:12.183640 containerd[1982]: time="2026-03-03T13:40:12.183114163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 3 13:40:12.192271 containerd[1982]: time="2026-03-03T13:40:12.192215758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:12.357741 containerd[1982]: time="2026-03-03T13:40:12.357407407Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:12.480995 containerd[1982]: time="2026-03-03T13:40:12.478698587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:40:12.495044 containerd[1982]: time="2026-03-03T13:40:12.494931684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 6.561128012s" Mar 3 13:40:12.495388 containerd[1982]: time="2026-03-03T13:40:12.495032397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 3 13:40:12.787041 containerd[1982]: time="2026-03-03T13:40:12.786922441Z" level=info msg="CreateContainer within sandbox \"026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 3 
13:40:12.903746 containerd[1982]: time="2026-03-03T13:40:12.903608288Z" level=info msg="Container ae90d27d50474b5eff9327cb476d7b947da2ecd58408951391e95d16fbfac730: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:40:12.981563 containerd[1982]: time="2026-03-03T13:40:12.981503791Z" level=info msg="CreateContainer within sandbox \"026faf1b76ce75d0f80e3480964850902f36ff3990f2f4ee6a538916cf67cfc9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ae90d27d50474b5eff9327cb476d7b947da2ecd58408951391e95d16fbfac730\"" Mar 3 13:40:13.002902 containerd[1982]: time="2026-03-03T13:40:13.002422754Z" level=info msg="StartContainer for \"ae90d27d50474b5eff9327cb476d7b947da2ecd58408951391e95d16fbfac730\"" Mar 3 13:40:13.007449 containerd[1982]: time="2026-03-03T13:40:13.007360731Z" level=info msg="connecting to shim ae90d27d50474b5eff9327cb476d7b947da2ecd58408951391e95d16fbfac730" address="unix:///run/containerd/s/59d86cae542661282cfbcbb7cadc344d5470c9b6d9ecbe40d868354abb69106a" protocol=ttrpc version=3 Mar 3 13:40:13.157378 systemd[1]: Started cri-containerd-ae90d27d50474b5eff9327cb476d7b947da2ecd58408951391e95d16fbfac730.scope - libcontainer container ae90d27d50474b5eff9327cb476d7b947da2ecd58408951391e95d16fbfac730. 
Mar 3 13:40:13.402391 containerd[1982]: time="2026-03-03T13:40:13.402356451Z" level=info msg="StartContainer for \"ae90d27d50474b5eff9327cb476d7b947da2ecd58408951391e95d16fbfac730\" returns successfully" Mar 3 13:40:14.621564 kubelet[3339]: I0303 13:40:14.621400 3339 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dc9c78b65-szhdf" podStartSLOduration=41.156665257 podStartE2EDuration="1m1.621382654s" podCreationTimestamp="2026-03-03 13:39:13 +0000 UTC" firstStartedPulling="2026-03-03 13:39:52.068022692 +0000 UTC m=+63.562231617" lastFinishedPulling="2026-03-03 13:40:12.532740085 +0000 UTC m=+84.026949014" observedRunningTime="2026-03-03 13:40:14.611154884 +0000 UTC m=+86.105363835" watchObservedRunningTime="2026-03-03 13:40:14.621382654 +0000 UTC m=+86.115591641" Mar 3 13:40:16.764268 systemd[1]: Started sshd@13-172.31.31.254:22-68.220.241.50:51744.service - OpenSSH per-connection server daemon (68.220.241.50:51744). Mar 3 13:40:17.286832 sshd[6362]: Accepted publickey for core from 68.220.241.50 port 51744 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:17.291265 sshd-session[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:17.296974 systemd-logind[1956]: New session 14 of user core. Mar 3 13:40:17.303061 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 3 13:40:17.902975 sshd[6365]: Connection closed by 68.220.241.50 port 51744 Mar 3 13:40:17.904153 sshd-session[6362]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:17.908695 systemd[1]: sshd@13-172.31.31.254:22-68.220.241.50:51744.service: Deactivated successfully. Mar 3 13:40:17.910643 systemd[1]: session-14.scope: Deactivated successfully. Mar 3 13:40:17.911907 systemd-logind[1956]: Session 14 logged out. Waiting for processes to exit. Mar 3 13:40:17.913808 systemd-logind[1956]: Removed session 14. 
Mar 3 13:40:18.004299 systemd[1]: Started sshd@14-172.31.31.254:22-68.220.241.50:51754.service - OpenSSH per-connection server daemon (68.220.241.50:51754). Mar 3 13:40:18.471234 sshd[6377]: Accepted publickey for core from 68.220.241.50 port 51754 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:18.473892 sshd-session[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:18.478766 systemd-logind[1956]: New session 15 of user core. Mar 3 13:40:18.486138 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 3 13:40:19.296128 sshd[6381]: Connection closed by 68.220.241.50 port 51754 Mar 3 13:40:19.297991 sshd-session[6377]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:19.302335 systemd-logind[1956]: Session 15 logged out. Waiting for processes to exit. Mar 3 13:40:19.302752 systemd[1]: sshd@14-172.31.31.254:22-68.220.241.50:51754.service: Deactivated successfully. Mar 3 13:40:19.305304 systemd[1]: session-15.scope: Deactivated successfully. Mar 3 13:40:19.307526 systemd-logind[1956]: Removed session 15. Mar 3 13:40:19.379277 systemd[1]: Started sshd@15-172.31.31.254:22-68.220.241.50:51766.service - OpenSSH per-connection server daemon (68.220.241.50:51766). Mar 3 13:40:19.827824 sshd[6392]: Accepted publickey for core from 68.220.241.50 port 51766 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:19.829232 sshd-session[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:19.835025 systemd-logind[1956]: New session 16 of user core. Mar 3 13:40:19.845121 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 3 13:40:20.903736 sshd[6396]: Connection closed by 68.220.241.50 port 51766 Mar 3 13:40:20.904776 sshd-session[6392]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:20.910746 systemd[1]: sshd@15-172.31.31.254:22-68.220.241.50:51766.service: Deactivated successfully. 
Mar 3 13:40:20.914179 systemd[1]: session-16.scope: Deactivated successfully. Mar 3 13:40:20.915489 systemd-logind[1956]: Session 16 logged out. Waiting for processes to exit. Mar 3 13:40:20.919265 systemd-logind[1956]: Removed session 16. Mar 3 13:40:20.991613 systemd[1]: Started sshd@16-172.31.31.254:22-68.220.241.50:51776.service - OpenSSH per-connection server daemon (68.220.241.50:51776). Mar 3 13:40:21.453494 sshd[6430]: Accepted publickey for core from 68.220.241.50 port 51776 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:21.454983 sshd-session[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:21.460365 systemd-logind[1956]: New session 17 of user core. Mar 3 13:40:21.465093 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 3 13:40:22.036824 sshd[6434]: Connection closed by 68.220.241.50 port 51776 Mar 3 13:40:22.038923 sshd-session[6430]: pam_unix(sshd:session): session closed for user core Mar 3 13:40:22.042907 systemd-logind[1956]: Session 17 logged out. Waiting for processes to exit. Mar 3 13:40:22.043475 systemd[1]: sshd@16-172.31.31.254:22-68.220.241.50:51776.service: Deactivated successfully. Mar 3 13:40:22.045600 systemd[1]: session-17.scope: Deactivated successfully. Mar 3 13:40:22.047530 systemd-logind[1956]: Removed session 17. Mar 3 13:40:22.124227 systemd[1]: Started sshd@17-172.31.31.254:22-68.220.241.50:51790.service - OpenSSH per-connection server daemon (68.220.241.50:51790). Mar 3 13:40:22.578155 sshd[6446]: Accepted publickey for core from 68.220.241.50 port 51790 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw Mar 3 13:40:22.579665 sshd-session[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:22.584869 systemd-logind[1956]: New session 18 of user core. Mar 3 13:40:22.589062 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 3 13:40:22.907598 sshd[6449]: Connection closed by 68.220.241.50 port 51790
Mar 3 13:40:22.909718 sshd-session[6446]: pam_unix(sshd:session): session closed for user core
Mar 3 13:40:22.918988 systemd[1]: sshd@17-172.31.31.254:22-68.220.241.50:51790.service: Deactivated successfully.
Mar 3 13:40:22.921377 systemd[1]: session-18.scope: Deactivated successfully.
Mar 3 13:40:22.923544 systemd-logind[1956]: Session 18 logged out. Waiting for processes to exit.
Mar 3 13:40:22.924566 systemd-logind[1956]: Removed session 18.
Mar 3 13:40:28.008293 systemd[1]: Started sshd@18-172.31.31.254:22-68.220.241.50:50240.service - OpenSSH per-connection server daemon (68.220.241.50:50240).
Mar 3 13:40:28.559941 sshd[6488]: Accepted publickey for core from 68.220.241.50 port 50240 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:40:28.562723 sshd-session[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:40:28.568580 systemd-logind[1956]: New session 19 of user core.
Mar 3 13:40:28.573047 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 3 13:40:29.485428 sshd[6491]: Connection closed by 68.220.241.50 port 50240
Mar 3 13:40:29.486196 sshd-session[6488]: pam_unix(sshd:session): session closed for user core
Mar 3 13:40:29.491928 systemd-logind[1956]: Session 19 logged out. Waiting for processes to exit.
Mar 3 13:40:29.492715 systemd[1]: sshd@18-172.31.31.254:22-68.220.241.50:50240.service: Deactivated successfully.
Mar 3 13:40:29.496116 systemd[1]: session-19.scope: Deactivated successfully.
Mar 3 13:40:29.500063 systemd-logind[1956]: Removed session 19.
Mar 3 13:40:34.573023 systemd[1]: Started sshd@19-172.31.31.254:22-68.220.241.50:39064.service - OpenSSH per-connection server daemon (68.220.241.50:39064).
Mar 3 13:40:35.022945 sshd[6532]: Accepted publickey for core from 68.220.241.50 port 39064 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:40:35.024076 sshd-session[6532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:40:35.032596 systemd-logind[1956]: New session 20 of user core.
Mar 3 13:40:35.035038 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 3 13:40:35.633567 sshd[6535]: Connection closed by 68.220.241.50 port 39064
Mar 3 13:40:35.635060 sshd-session[6532]: pam_unix(sshd:session): session closed for user core
Mar 3 13:40:35.638944 systemd[1]: sshd@19-172.31.31.254:22-68.220.241.50:39064.service: Deactivated successfully.
Mar 3 13:40:35.641384 systemd[1]: session-20.scope: Deactivated successfully.
Mar 3 13:40:35.642307 systemd-logind[1956]: Session 20 logged out. Waiting for processes to exit.
Mar 3 13:40:35.643716 systemd-logind[1956]: Removed session 20.
Mar 3 13:40:40.723337 systemd[1]: Started sshd@20-172.31.31.254:22-68.220.241.50:39066.service - OpenSSH per-connection server daemon (68.220.241.50:39066).
Mar 3 13:40:41.226584 sshd[6571]: Accepted publickey for core from 68.220.241.50 port 39066 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:40:41.228310 sshd-session[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:40:41.233300 systemd-logind[1956]: New session 21 of user core.
Mar 3 13:40:41.236024 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 3 13:40:42.107773 sshd[6574]: Connection closed by 68.220.241.50 port 39066
Mar 3 13:40:42.109689 sshd-session[6571]: pam_unix(sshd:session): session closed for user core
Mar 3 13:40:42.113647 systemd-logind[1956]: Session 21 logged out. Waiting for processes to exit.
Mar 3 13:40:42.114166 systemd[1]: sshd@20-172.31.31.254:22-68.220.241.50:39066.service: Deactivated successfully.
Mar 3 13:40:42.116401 systemd[1]: session-21.scope: Deactivated successfully.
Mar 3 13:40:42.118936 systemd-logind[1956]: Removed session 21.
Mar 3 13:40:47.212100 systemd[1]: Started sshd@21-172.31.31.254:22-68.220.241.50:41866.service - OpenSSH per-connection server daemon (68.220.241.50:41866).
Mar 3 13:40:47.745999 sshd[6615]: Accepted publickey for core from 68.220.241.50 port 41866 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:40:47.751109 sshd-session[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:40:47.759115 systemd-logind[1956]: New session 22 of user core.
Mar 3 13:40:47.767096 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 3 13:40:48.554319 sshd[6618]: Connection closed by 68.220.241.50 port 41866
Mar 3 13:40:48.555100 sshd-session[6615]: pam_unix(sshd:session): session closed for user core
Mar 3 13:40:48.559583 systemd-logind[1956]: Session 22 logged out. Waiting for processes to exit.
Mar 3 13:40:48.559723 systemd[1]: sshd@21-172.31.31.254:22-68.220.241.50:41866.service: Deactivated successfully.
Mar 3 13:40:48.562007 systemd[1]: session-22.scope: Deactivated successfully.
Mar 3 13:40:48.563748 systemd-logind[1956]: Removed session 22.
Mar 3 13:40:53.638012 systemd[1]: Started sshd@22-172.31.31.254:22-68.220.241.50:37524.service - OpenSSH per-connection server daemon (68.220.241.50:37524).
Mar 3 13:40:54.096367 sshd[6633]: Accepted publickey for core from 68.220.241.50 port 37524 ssh2: RSA SHA256:UxBW+CXMVLyW97tYWSUXD+Ppcv2OHptrnXcAM8AU9iw
Mar 3 13:40:54.097726 sshd-session[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:40:54.103096 systemd-logind[1956]: New session 23 of user core.
Mar 3 13:40:54.108073 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 3 13:40:54.443774 sshd[6636]: Connection closed by 68.220.241.50 port 37524
Mar 3 13:40:54.445074 sshd-session[6633]: pam_unix(sshd:session): session closed for user core
Mar 3 13:40:54.449055 systemd[1]: sshd@22-172.31.31.254:22-68.220.241.50:37524.service: Deactivated successfully.
Mar 3 13:40:54.451360 systemd[1]: session-23.scope: Deactivated successfully.
Mar 3 13:40:54.452238 systemd-logind[1956]: Session 23 logged out. Waiting for processes to exit.
Mar 3 13:40:54.453696 systemd-logind[1956]: Removed session 23.
Mar 3 13:41:08.313298 systemd[1]: cri-containerd-4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec.scope: Deactivated successfully.
Mar 3 13:41:08.314127 systemd[1]: cri-containerd-4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec.scope: Consumed 2.770s CPU time, 85.1M memory peak, 82.9M read from disk.
Mar 3 13:41:08.454435 containerd[1982]: time="2026-03-03T13:41:08.441841728Z" level=info msg="received container exit event container_id:\"4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec\" id:\"4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec\" pid:3178 exit_status:1 exited_at:{seconds:1772545268 nanos:351190903}"
Mar 3 13:41:08.630115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec-rootfs.mount: Deactivated successfully.
Mar 3 13:41:09.003229 kubelet[3339]: I0303 13:41:09.003097 3339 scope.go:122] "RemoveContainer" containerID="4a18fdb7da3551ecd2e38c0a31b4dbe6880b5b5f922d515f78e990a61a5c51ec"
Mar 3 13:41:09.134720 containerd[1982]: time="2026-03-03T13:41:09.134657337Z" level=info msg="CreateContainer within sandbox \"a2e28f77a4a4ed3379362cb1145e198f80b36630e7384c9c6279d1e1e6b89bf5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 3 13:41:09.303593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2959482903.mount: Deactivated successfully.
Mar 3 13:41:09.314120 containerd[1982]: time="2026-03-03T13:41:09.314071236Z" level=info msg="Container 9dc751f2d1ff857357df293b7529ef6d3221adbf667577d270b7f70dca69c6f1: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:41:09.337573 containerd[1982]: time="2026-03-03T13:41:09.337532966Z" level=info msg="CreateContainer within sandbox \"a2e28f77a4a4ed3379362cb1145e198f80b36630e7384c9c6279d1e1e6b89bf5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9dc751f2d1ff857357df293b7529ef6d3221adbf667577d270b7f70dca69c6f1\""
Mar 3 13:41:09.343854 containerd[1982]: time="2026-03-03T13:41:09.343789633Z" level=info msg="StartContainer for \"9dc751f2d1ff857357df293b7529ef6d3221adbf667577d270b7f70dca69c6f1\""
Mar 3 13:41:09.361378 containerd[1982]: time="2026-03-03T13:41:09.361322253Z" level=info msg="connecting to shim 9dc751f2d1ff857357df293b7529ef6d3221adbf667577d270b7f70dca69c6f1" address="unix:///run/containerd/s/0f112d6c542cb42fa50de37a3e30e2921563d4642cf80091c4927c93847c5488" protocol=ttrpc version=3
Mar 3 13:41:09.401112 systemd[1]: Started cri-containerd-9dc751f2d1ff857357df293b7529ef6d3221adbf667577d270b7f70dca69c6f1.scope - libcontainer container 9dc751f2d1ff857357df293b7529ef6d3221adbf667577d270b7f70dca69c6f1.
Mar 3 13:41:09.491938 containerd[1982]: time="2026-03-03T13:41:09.491901622Z" level=info msg="StartContainer for \"9dc751f2d1ff857357df293b7529ef6d3221adbf667577d270b7f70dca69c6f1\" returns successfully"
Mar 3 13:41:09.833545 systemd[1]: cri-containerd-63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049.scope: Deactivated successfully.
Mar 3 13:41:09.834062 systemd[1]: cri-containerd-63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049.scope: Consumed 7.571s CPU time, 132.4M memory peak, 62.8M read from disk.
Mar 3 13:41:09.837202 containerd[1982]: time="2026-03-03T13:41:09.837167475Z" level=info msg="received container exit event container_id:\"63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049\" id:\"63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049\" pid:3671 exit_status:1 exited_at:{seconds:1772545269 nanos:835558669}"
Mar 3 13:41:09.865345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049-rootfs.mount: Deactivated successfully.
Mar 3 13:41:10.004398 kubelet[3339]: I0303 13:41:10.004358 3339 scope.go:122] "RemoveContainer" containerID="63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049"
Mar 3 13:41:10.032904 containerd[1982]: time="2026-03-03T13:41:10.031217349Z" level=info msg="CreateContainer within sandbox \"77d824227dc6d44acfdb349dd363d4ccdd5c0b2238c763a95cf5afad5c8fe955\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 3 13:41:10.051845 containerd[1982]: time="2026-03-03T13:41:10.051799903Z" level=info msg="Container d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:41:10.065652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3313555433.mount: Deactivated successfully.
Mar 3 13:41:10.073523 containerd[1982]: time="2026-03-03T13:41:10.073477030Z" level=info msg="CreateContainer within sandbox \"77d824227dc6d44acfdb349dd363d4ccdd5c0b2238c763a95cf5afad5c8fe955\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9\""
Mar 3 13:41:10.077080 containerd[1982]: time="2026-03-03T13:41:10.077050263Z" level=info msg="StartContainer for \"d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9\""
Mar 3 13:41:10.080905 containerd[1982]: time="2026-03-03T13:41:10.080857887Z" level=info msg="connecting to shim d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9" address="unix:///run/containerd/s/42e4129227ff0a64a573d6575b6c9e6b43326d0e28e80595e08aca9873bfd1ae" protocol=ttrpc version=3
Mar 3 13:41:10.145140 systemd[1]: Started cri-containerd-d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9.scope - libcontainer container d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9.
Mar 3 13:41:10.240028 containerd[1982]: time="2026-03-03T13:41:10.239987687Z" level=info msg="StartContainer for \"d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9\" returns successfully"
Mar 3 13:41:11.759203 kubelet[3339]: E0303 13:41:11.753788 3339 controller.go:251] "Failed to update lease" err="Put \"https://172.31.31.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-254?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 3 13:41:14.958052 systemd[1]: cri-containerd-a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22.scope: Deactivated successfully.
Mar 3 13:41:14.958331 systemd[1]: cri-containerd-a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22.scope: Consumed 1.280s CPU time, 35.5M memory peak, 47.2M read from disk.
Mar 3 13:41:14.961839 containerd[1982]: time="2026-03-03T13:41:14.961723751Z" level=info msg="received container exit event container_id:\"a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22\" id:\"a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22\" pid:3142 exit_status:1 exited_at:{seconds:1772545274 nanos:961423014}"
Mar 3 13:41:14.990335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22-rootfs.mount: Deactivated successfully.
Mar 3 13:41:15.027358 kubelet[3339]: I0303 13:41:15.027325 3339 scope.go:122] "RemoveContainer" containerID="a2ce8cf82fc914c4515ce3e6120c26b6ab784e9627595763b54b21cce3b55b22"
Mar 3 13:41:15.029718 containerd[1982]: time="2026-03-03T13:41:15.029679290Z" level=info msg="CreateContainer within sandbox \"de87b1d9fd82b84c4080122b5a7cdd9412cc0eaaf996b8d0e85e600593d343aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 3 13:41:15.048321 containerd[1982]: time="2026-03-03T13:41:15.047590676Z" level=info msg="Container 4d3d4ab279baa7acd1e88024bec3bbf6326cee640e034a285587687cf9228f66: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:41:15.062195 containerd[1982]: time="2026-03-03T13:41:15.062148743Z" level=info msg="CreateContainer within sandbox \"de87b1d9fd82b84c4080122b5a7cdd9412cc0eaaf996b8d0e85e600593d343aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4d3d4ab279baa7acd1e88024bec3bbf6326cee640e034a285587687cf9228f66\""
Mar 3 13:41:15.062924 containerd[1982]: time="2026-03-03T13:41:15.062773548Z" level=info msg="StartContainer for \"4d3d4ab279baa7acd1e88024bec3bbf6326cee640e034a285587687cf9228f66\""
Mar 3 13:41:15.063966 containerd[1982]: time="2026-03-03T13:41:15.063935641Z" level=info msg="connecting to shim 4d3d4ab279baa7acd1e88024bec3bbf6326cee640e034a285587687cf9228f66" address="unix:///run/containerd/s/9a5f46024fc9e77d1b4c838e3250a39a41211d9e3261601b6cf8fd2b4e0b19c7" protocol=ttrpc version=3
Mar 3 13:41:15.094090 systemd[1]: Started cri-containerd-4d3d4ab279baa7acd1e88024bec3bbf6326cee640e034a285587687cf9228f66.scope - libcontainer container 4d3d4ab279baa7acd1e88024bec3bbf6326cee640e034a285587687cf9228f66.
Mar 3 13:41:15.153622 containerd[1982]: time="2026-03-03T13:41:15.153572973Z" level=info msg="StartContainer for \"4d3d4ab279baa7acd1e88024bec3bbf6326cee640e034a285587687cf9228f66\" returns successfully"
Mar 3 13:41:21.760202 kubelet[3339]: E0303 13:41:21.759814 3339 controller.go:251] "Failed to update lease" err="Put \"https://172.31.31.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-254?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 3 13:41:21.916458 systemd[1]: cri-containerd-d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9.scope: Deactivated successfully.
Mar 3 13:41:21.917394 systemd[1]: cri-containerd-d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9.scope: Consumed 287ms CPU time, 83.1M memory peak, 48.4M read from disk.
Mar 3 13:41:21.917898 containerd[1982]: time="2026-03-03T13:41:21.917721518Z" level=info msg="received container exit event container_id:\"d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9\" id:\"d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9\" pid:6771 exit_status:1 exited_at:{seconds:1772545281 nanos:916836164}"
Mar 3 13:41:21.944823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9-rootfs.mount: Deactivated successfully.
Mar 3 13:41:22.071710 kubelet[3339]: I0303 13:41:22.071597 3339 scope.go:122] "RemoveContainer" containerID="d28043a4ad9c8b95fa23cd8f59aaff84791bd2c5384ead2d72ab1a014e56b9e9"
Mar 3 13:41:22.088661 kubelet[3339]: I0303 13:41:22.088328 3339 scope.go:122] "RemoveContainer" containerID="63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049"
Mar 3 13:41:22.097221 kubelet[3339]: E0303 13:41:22.092224 3339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-cz645_tigera-operator(bd96d374-697c-4c87-8cea-03d2a2bc3ea2)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-cz645" podUID="bd96d374-697c-4c87-8cea-03d2a2bc3ea2"
Mar 3 13:41:22.210094 containerd[1982]: time="2026-03-03T13:41:22.209979012Z" level=info msg="RemoveContainer for \"63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049\""
Mar 3 13:41:22.237042 containerd[1982]: time="2026-03-03T13:41:22.236992765Z" level=info msg="RemoveContainer for \"63a8c6f0397df90384f2df1339387e168af4428414657cb521abd42671e95049\" returns successfully"
Mar 3 13:41:31.765670 kubelet[3339]: E0303 13:41:31.765526 3339 controller.go:251] "Failed to update lease" err="Put \"https://172.31.31.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-254?timeout=10s\": context deadline exceeded"
Mar 3 13:41:32.018596 systemd-logind[1956]: Power key pressed short.