Mar 13 00:38:05.913621 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026 Mar 13 00:38:05.913661 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 00:38:05.913680 kernel: BIOS-provided physical RAM map: Mar 13 00:38:05.913691 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 13 00:38:05.913701 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Mar 13 00:38:05.913713 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Mar 13 00:38:05.913728 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Mar 13 00:38:05.913742 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Mar 13 00:38:05.913754 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Mar 13 00:38:05.913767 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Mar 13 00:38:05.913779 kernel: NX (Execute Disable) protection: active Mar 13 00:38:05.913795 kernel: APIC: Static calls initialized Mar 13 00:38:05.913808 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Mar 13 00:38:05.913822 kernel: extended physical RAM map: Mar 13 00:38:05.913839 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 13 00:38:05.913853 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Mar 13 00:38:05.913870 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Mar 13 00:38:05.913883 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Mar 13 00:38:05.913895 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Mar 13 00:38:05.913909 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Mar 13 00:38:05.913922 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Mar 13 00:38:05.913936 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Mar 13 00:38:05.913949 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Mar 13 00:38:05.913962 kernel: efi: EFI v2.7 by EDK II Mar 13 00:38:05.913976 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Mar 13 00:38:05.913990 kernel: secureboot: Secure boot disabled Mar 13 00:38:05.914003 kernel: SMBIOS 2.7 present. Mar 13 00:38:05.914018 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Mar 13 00:38:05.914032 kernel: DMI: Memory slots populated: 1/1 Mar 13 00:38:05.914045 kernel: Hypervisor detected: KVM Mar 13 00:38:05.914059 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Mar 13 00:38:05.914072 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 13 00:38:05.914086 kernel: kvm-clock: using sched offset of 5319419386 cycles Mar 13 00:38:05.914100 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 13 00:38:05.914114 kernel: tsc: Detected 2499.996 MHz processor Mar 13 00:38:05.914128 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 13 00:38:05.914142 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 13 00:38:05.914158 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Mar 13 00:38:05.914172 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 13 00:38:05.914187 kernel: x86/PAT: Configuration 
[0-7]: WB WC UC- UC WB WP UC- WT Mar 13 00:38:05.914206 kernel: Using GB pages for direct mapping Mar 13 00:38:05.914220 kernel: ACPI: Early table checksum verification disabled Mar 13 00:38:05.914235 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Mar 13 00:38:05.914249 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Mar 13 00:38:05.914267 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Mar 13 00:38:05.914282 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Mar 13 00:38:05.914297 kernel: ACPI: FACS 0x00000000789D0000 000040 Mar 13 00:38:05.914312 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Mar 13 00:38:05.914327 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Mar 13 00:38:05.914342 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Mar 13 00:38:05.914356 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Mar 13 00:38:05.914371 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Mar 13 00:38:05.914388 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Mar 13 00:38:05.914403 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Mar 13 00:38:05.916463 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Mar 13 00:38:05.916481 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Mar 13 00:38:05.916496 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Mar 13 00:38:05.916511 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Mar 13 00:38:05.916526 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Mar 13 00:38:05.916541 kernel: ACPI: Reserving SLIT table 
memory at [mem 0x7895a000-0x7895a06b] Mar 13 00:38:05.916559 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Mar 13 00:38:05.916575 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Mar 13 00:38:05.916589 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Mar 13 00:38:05.916604 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Mar 13 00:38:05.916618 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Mar 13 00:38:05.916633 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Mar 13 00:38:05.916648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Mar 13 00:38:05.916663 kernel: NUMA: Initialized distance table, cnt=1 Mar 13 00:38:05.916678 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff] Mar 13 00:38:05.916693 kernel: Zone ranges: Mar 13 00:38:05.916711 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 13 00:38:05.916726 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Mar 13 00:38:05.916751 kernel: Normal empty Mar 13 00:38:05.916766 kernel: Device empty Mar 13 00:38:05.916781 kernel: Movable zone start for each node Mar 13 00:38:05.916795 kernel: Early memory node ranges Mar 13 00:38:05.916810 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 13 00:38:05.916824 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Mar 13 00:38:05.916839 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Mar 13 00:38:05.916856 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Mar 13 00:38:05.916871 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 13 00:38:05.916886 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 13 00:38:05.916901 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 13 00:38:05.916916 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Mar 13 00:38:05.916930 kernel: ACPI: PM-Timer IO 
Port: 0xb008 Mar 13 00:38:05.916945 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 13 00:38:05.916961 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Mar 13 00:38:05.916975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 13 00:38:05.916993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 13 00:38:05.917008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 13 00:38:05.917023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 13 00:38:05.917037 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 13 00:38:05.917052 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 13 00:38:05.917066 kernel: TSC deadline timer available Mar 13 00:38:05.917081 kernel: CPU topo: Max. logical packages: 1 Mar 13 00:38:05.917096 kernel: CPU topo: Max. logical dies: 1 Mar 13 00:38:05.917110 kernel: CPU topo: Max. dies per package: 1 Mar 13 00:38:05.917124 kernel: CPU topo: Max. threads per core: 2 Mar 13 00:38:05.917143 kernel: CPU topo: Num. cores per package: 1 Mar 13 00:38:05.917158 kernel: CPU topo: Num. 
threads per package: 2 Mar 13 00:38:05.917172 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Mar 13 00:38:05.917188 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 13 00:38:05.917202 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Mar 13 00:38:05.917217 kernel: Booting paravirtualized kernel on KVM Mar 13 00:38:05.917232 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 13 00:38:05.917247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 13 00:38:05.917261 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Mar 13 00:38:05.917279 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Mar 13 00:38:05.917293 kernel: pcpu-alloc: [0] 0 1 Mar 13 00:38:05.917308 kernel: kvm-guest: PV spinlocks enabled Mar 13 00:38:05.917323 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 13 00:38:05.917340 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 00:38:05.917355 kernel: random: crng init done Mar 13 00:38:05.917370 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 13 00:38:05.917385 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 13 00:38:05.917402 kernel: Fallback order for Node 0: 0 Mar 13 00:38:05.917429 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 509451 Mar 13 00:38:05.917448 kernel: Policy zone: DMA32 Mar 13 00:38:05.917474 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 13 00:38:05.917493 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 13 00:38:05.917509 kernel: Kernel/User page tables isolation: enabled Mar 13 00:38:05.917525 kernel: ftrace: allocating 40099 entries in 157 pages Mar 13 00:38:05.917540 kernel: ftrace: allocated 157 pages with 5 groups Mar 13 00:38:05.917556 kernel: Dynamic Preempt: voluntary Mar 13 00:38:05.917572 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 13 00:38:05.917588 kernel: rcu: RCU event tracing is enabled. Mar 13 00:38:05.917604 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 13 00:38:05.917623 kernel: Trampoline variant of Tasks RCU enabled. Mar 13 00:38:05.917639 kernel: Rude variant of Tasks RCU enabled. Mar 13 00:38:05.917655 kernel: Tracing variant of Tasks RCU enabled. Mar 13 00:38:05.917671 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 13 00:38:05.917686 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 13 00:38:05.917705 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 00:38:05.917721 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 00:38:05.917737 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 00:38:05.917752 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 13 00:38:05.917769 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 13 00:38:05.917785 kernel: Console: colour dummy device 80x25 Mar 13 00:38:05.917800 kernel: printk: legacy console [tty0] enabled Mar 13 00:38:05.917815 kernel: printk: legacy console [ttyS0] enabled Mar 13 00:38:05.917834 kernel: ACPI: Core revision 20240827 Mar 13 00:38:05.917850 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Mar 13 00:38:05.917866 kernel: APIC: Switch to symmetric I/O mode setup Mar 13 00:38:05.917881 kernel: x2apic enabled Mar 13 00:38:05.917897 kernel: APIC: Switched APIC routing to: physical x2apic Mar 13 00:38:05.917913 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Mar 13 00:38:05.917929 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Mar 13 00:38:05.917945 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 13 00:38:05.917960 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 13 00:38:05.917976 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 13 00:38:05.917994 kernel: Spectre V2 : Mitigation: Retpolines Mar 13 00:38:05.918009 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 13 00:38:05.918025 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Mar 13 00:38:05.918041 kernel: RETBleed: Vulnerable Mar 13 00:38:05.918056 kernel: Speculative Store Bypass: Vulnerable Mar 13 00:38:05.918072 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Mar 13 00:38:05.918087 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 13 00:38:05.918102 kernel: GDS: Unknown: Dependent on hypervisor status Mar 13 00:38:05.918118 kernel: active return thunk: its_return_thunk Mar 13 00:38:05.918133 kernel: ITS: Mitigation: Aligned branch/return thunks Mar 13 00:38:05.918148 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 13 00:38:05.918166 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 13 00:38:05.918182 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 13 00:38:05.918197 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Mar 13 00:38:05.918213 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Mar 13 00:38:05.918228 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 13 00:38:05.918243 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 13 00:38:05.918259 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 13 00:38:05.918274 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Mar 13 00:38:05.918289 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 13 00:38:05.918305 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Mar 13 00:38:05.918324 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Mar 13 00:38:05.918339 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Mar 13 00:38:05.918354 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Mar 13 00:38:05.918370 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Mar 13 00:38:05.918385 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Mar 13 00:38:05.918401 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Mar 13 00:38:05.924201 kernel: Freeing SMP alternatives memory: 32K Mar 13 00:38:05.924223 kernel: pid_max: default: 32768 minimum: 301 Mar 13 00:38:05.924238 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Mar 13 00:38:05.924251 kernel: landlock: Up and running. Mar 13 00:38:05.924264 kernel: SELinux: Initializing. Mar 13 00:38:05.924278 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 13 00:38:05.924299 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 13 00:38:05.924315 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 13 00:38:05.924332 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 13 00:38:05.924349 kernel: signal: max sigframe size: 3632 Mar 13 00:38:05.924367 kernel: rcu: Hierarchical SRCU implementation. Mar 13 00:38:05.924385 kernel: rcu: Max phase no-delay instances is 400. Mar 13 00:38:05.924402 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Mar 13 00:38:05.924433 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 13 00:38:05.924450 kernel: smp: Bringing up secondary CPUs ... Mar 13 00:38:05.924470 kernel: smpboot: x86: Booting SMP configuration: Mar 13 00:38:05.924486 kernel: .... node #0, CPUs: #1 Mar 13 00:38:05.924503 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Mar 13 00:38:05.924520 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Mar 13 00:38:05.924536 kernel: smp: Brought up 1 node, 2 CPUs Mar 13 00:38:05.924552 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Mar 13 00:38:05.924569 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 133380K reserved, 0K cma-reserved) Mar 13 00:38:05.924585 kernel: devtmpfs: initialized Mar 13 00:38:05.924601 kernel: x86/mm: Memory block size: 128MB Mar 13 00:38:05.924621 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Mar 13 00:38:05.924638 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 13 00:38:05.924654 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 13 00:38:05.924670 kernel: pinctrl core: initialized pinctrl subsystem Mar 13 00:38:05.924686 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 13 00:38:05.924703 kernel: audit: initializing netlink subsys (disabled) Mar 13 00:38:05.924719 kernel: audit: type=2000 audit(1773362283.530:1): state=initialized audit_enabled=0 res=1 Mar 13 00:38:05.924743 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 13 00:38:05.924759 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 13 00:38:05.924780 kernel: cpuidle: using governor menu Mar 13 00:38:05.924795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 13 00:38:05.924812 kernel: dca service started, version 1.12.1 Mar 13 00:38:05.924828 kernel: PCI: Using configuration type 1 for base access Mar 13 00:38:05.924844 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 13 00:38:05.924861 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 13 00:38:05.924877 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 13 00:38:05.924894 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 13 00:38:05.924910 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 13 00:38:05.924929 kernel: ACPI: Added _OSI(Module Device) Mar 13 00:38:05.924946 kernel: ACPI: Added _OSI(Processor Device) Mar 13 00:38:05.924961 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 13 00:38:05.924977 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Mar 13 00:38:05.924994 kernel: ACPI: Interpreter enabled Mar 13 00:38:05.925010 kernel: ACPI: PM: (supports S0 S5) Mar 13 00:38:05.925026 kernel: ACPI: Using IOAPIC for interrupt routing Mar 13 00:38:05.925043 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 13 00:38:05.925059 kernel: PCI: Using E820 reservations for host bridge windows Mar 13 00:38:05.925078 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Mar 13 00:38:05.925094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 13 00:38:05.925318 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 13 00:38:05.925486 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 13 00:38:05.925623 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 13 00:38:05.925642 kernel: acpiphp: Slot [3] registered Mar 13 00:38:05.925658 kernel: acpiphp: Slot [4] registered Mar 13 00:38:05.925678 kernel: acpiphp: Slot [5] registered Mar 13 00:38:05.925694 kernel: acpiphp: Slot [6] registered Mar 13 00:38:05.925710 kernel: acpiphp: Slot [7] registered Mar 13 00:38:05.925725 kernel: acpiphp: Slot [8] registered Mar 13 00:38:05.925741 kernel: acpiphp: Slot [9] registered 
Mar 13 00:38:05.925756 kernel: acpiphp: Slot [10] registered Mar 13 00:38:05.925772 kernel: acpiphp: Slot [11] registered Mar 13 00:38:05.925788 kernel: acpiphp: Slot [12] registered Mar 13 00:38:05.925804 kernel: acpiphp: Slot [13] registered Mar 13 00:38:05.925822 kernel: acpiphp: Slot [14] registered Mar 13 00:38:05.925838 kernel: acpiphp: Slot [15] registered Mar 13 00:38:05.925854 kernel: acpiphp: Slot [16] registered Mar 13 00:38:05.925869 kernel: acpiphp: Slot [17] registered Mar 13 00:38:05.925885 kernel: acpiphp: Slot [18] registered Mar 13 00:38:05.925900 kernel: acpiphp: Slot [19] registered Mar 13 00:38:05.925916 kernel: acpiphp: Slot [20] registered Mar 13 00:38:05.925932 kernel: acpiphp: Slot [21] registered Mar 13 00:38:05.925947 kernel: acpiphp: Slot [22] registered Mar 13 00:38:05.925963 kernel: acpiphp: Slot [23] registered Mar 13 00:38:05.925982 kernel: acpiphp: Slot [24] registered Mar 13 00:38:05.925997 kernel: acpiphp: Slot [25] registered Mar 13 00:38:05.926012 kernel: acpiphp: Slot [26] registered Mar 13 00:38:05.926028 kernel: acpiphp: Slot [27] registered Mar 13 00:38:05.926044 kernel: acpiphp: Slot [28] registered Mar 13 00:38:05.926060 kernel: acpiphp: Slot [29] registered Mar 13 00:38:05.926076 kernel: acpiphp: Slot [30] registered Mar 13 00:38:05.926091 kernel: acpiphp: Slot [31] registered Mar 13 00:38:05.926107 kernel: PCI host bridge to bus 0000:00 Mar 13 00:38:05.926251 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 13 00:38:05.926376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 13 00:38:05.926570 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 13 00:38:05.926698 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Mar 13 00:38:05.926895 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Mar 13 00:38:05.927023 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 13 00:38:05.927192 
kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Mar 13 00:38:05.927358 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Mar 13 00:38:05.927534 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Mar 13 00:38:05.927674 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Mar 13 00:38:05.927809 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Mar 13 00:38:05.927942 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Mar 13 00:38:05.928077 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Mar 13 00:38:05.928222 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Mar 13 00:38:05.928357 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Mar 13 00:38:05.928506 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Mar 13 00:38:05.928650 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Mar 13 00:38:05.928853 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Mar 13 00:38:05.929051 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Mar 13 00:38:05.929191 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 13 00:38:05.929348 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Mar 13 00:38:05.931132 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Mar 13 00:38:05.931294 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Mar 13 00:38:05.931463 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Mar 13 00:38:05.931485 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 13 00:38:05.931502 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 13 00:38:05.931518 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 13 00:38:05.931539 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 
11 Mar 13 00:38:05.931555 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 13 00:38:05.931571 kernel: iommu: Default domain type: Translated Mar 13 00:38:05.931587 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 13 00:38:05.931602 kernel: efivars: Registered efivars operations Mar 13 00:38:05.931618 kernel: PCI: Using ACPI for IRQ routing Mar 13 00:38:05.931634 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 13 00:38:05.931650 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Mar 13 00:38:05.931665 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Mar 13 00:38:05.931683 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Mar 13 00:38:05.931822 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Mar 13 00:38:05.931959 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Mar 13 00:38:05.932096 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 13 00:38:05.932116 kernel: vgaarb: loaded Mar 13 00:38:05.932132 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Mar 13 00:38:05.932148 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Mar 13 00:38:05.932164 kernel: clocksource: Switched to clocksource kvm-clock Mar 13 00:38:05.932180 kernel: VFS: Disk quotas dquot_6.6.0 Mar 13 00:38:05.932199 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 13 00:38:05.932215 kernel: pnp: PnP ACPI init Mar 13 00:38:05.932231 kernel: pnp: PnP ACPI: found 5 devices Mar 13 00:38:05.932247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 13 00:38:05.932263 kernel: NET: Registered PF_INET protocol family Mar 13 00:38:05.932279 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 13 00:38:05.932294 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 13 00:38:05.932310 kernel: Table-perturb hash table entries: 
65536 (order: 6, 262144 bytes, linear) Mar 13 00:38:05.932327 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 13 00:38:05.932346 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 13 00:38:05.932362 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 13 00:38:05.932378 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 13 00:38:05.932393 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 13 00:38:05.932424 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 13 00:38:05.932440 kernel: NET: Registered PF_XDP protocol family Mar 13 00:38:05.932571 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 13 00:38:05.932697 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 13 00:38:05.932834 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 13 00:38:05.932960 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Mar 13 00:38:05.933082 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Mar 13 00:38:05.933224 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 13 00:38:05.933244 kernel: PCI: CLS 0 bytes, default 64 Mar 13 00:38:05.933260 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 13 00:38:05.933277 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Mar 13 00:38:05.933292 kernel: clocksource: Switched to clocksource tsc Mar 13 00:38:05.933308 kernel: Initialise system trusted keyrings Mar 13 00:38:05.933328 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 13 00:38:05.933343 kernel: Key type asymmetric registered Mar 13 00:38:05.933359 kernel: Asymmetric key parser 'x509' registered Mar 13 00:38:05.933374 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 13 00:38:05.933390 
kernel: io scheduler mq-deadline registered Mar 13 00:38:05.936866 kernel: io scheduler kyber registered Mar 13 00:38:05.936898 kernel: io scheduler bfq registered Mar 13 00:38:05.936914 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 13 00:38:05.936930 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 00:38:05.936951 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 13 00:38:05.936966 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 13 00:38:05.936982 kernel: i8042: Warning: Keylock active Mar 13 00:38:05.936996 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 13 00:38:05.937012 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 13 00:38:05.937199 kernel: rtc_cmos 00:00: RTC can wake from S4 Mar 13 00:38:05.937338 kernel: rtc_cmos 00:00: registered as rtc0 Mar 13 00:38:05.937496 kernel: rtc_cmos 00:00: setting system clock to 2026-03-13T00:38:05 UTC (1773362285) Mar 13 00:38:05.937626 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Mar 13 00:38:05.937666 kernel: intel_pstate: CPU model not supported Mar 13 00:38:05.937685 kernel: efifb: probing for efifb Mar 13 00:38:05.937700 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Mar 13 00:38:05.937716 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Mar 13 00:38:05.937733 kernel: efifb: scrolling: redraw Mar 13 00:38:05.937750 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 13 00:38:05.937766 kernel: Console: switching to colour frame buffer device 100x37 Mar 13 00:38:05.937783 kernel: fb0: EFI VGA frame buffer device Mar 13 00:38:05.937797 kernel: pstore: Using crash dump compression: deflate Mar 13 00:38:05.937814 kernel: pstore: Registered efi_pstore as persistent store backend Mar 13 00:38:05.937831 kernel: NET: Registered PF_INET6 protocol family Mar 13 00:38:05.937848 kernel: Segment Routing with IPv6 Mar 13 00:38:05.937865 kernel: In-situ OAM 
(IOAM) with IPv6 Mar 13 00:38:05.937882 kernel: NET: Registered PF_PACKET protocol family Mar 13 00:38:05.937899 kernel: Key type dns_resolver registered Mar 13 00:38:05.937916 kernel: IPI shorthand broadcast: enabled Mar 13 00:38:05.937936 kernel: sched_clock: Marking stable (2569002640, 144373623)->(2788883254, -75506991) Mar 13 00:38:05.937953 kernel: registered taskstats version 1 Mar 13 00:38:05.937970 kernel: Loading compiled-in X.509 certificates Mar 13 00:38:05.937987 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8' Mar 13 00:38:05.938003 kernel: Demotion targets for Node 0: null Mar 13 00:38:05.938020 kernel: Key type .fscrypt registered Mar 13 00:38:05.938036 kernel: Key type fscrypt-provisioning registered Mar 13 00:38:05.938053 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 13 00:38:05.938070 kernel: ima: Allocated hash algorithm: sha1 Mar 13 00:38:05.938090 kernel: ima: No architecture policies found Mar 13 00:38:05.938114 kernel: clk: Disabling unused clocks Mar 13 00:38:05.938134 kernel: Warning: unable to open an initial console. Mar 13 00:38:05.938154 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 13 00:38:05.938175 kernel: Write protecting the kernel read-only data: 40960k Mar 13 00:38:05.938199 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 13 00:38:05.938223 kernel: Run /init as init process Mar 13 00:38:05.938244 kernel: with arguments: Mar 13 00:38:05.938264 kernel: /init Mar 13 00:38:05.938284 kernel: with environment: Mar 13 00:38:05.938303 kernel: HOME=/ Mar 13 00:38:05.938324 kernel: TERM=linux Mar 13 00:38:05.938346 systemd[1]: Successfully made /usr/ read-only. 
Mar 13 00:38:05.938373 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:38:05.938398 systemd[1]: Detected virtualization amazon.
Mar 13 00:38:05.938431 systemd[1]: Detected architecture x86-64.
Mar 13 00:38:05.938452 systemd[1]: Running in initrd.
Mar 13 00:38:05.938472 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:38:05.938494 systemd[1]: Hostname set to .
Mar 13 00:38:05.938516 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:38:05.938537 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:38:05.938558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:38:05.938587 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:38:05.938610 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:38:05.938631 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:38:05.938652 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:38:05.938675 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:38:05.938699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:38:05.938724 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:38:05.938745 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:38:05.938766 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:38:05.938787 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:38:05.938809 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:38:05.938830 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:38:05.938851 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:38:05.938872 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:38:05.938894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:38:05.938918 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:38:05.938941 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:38:05.938963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:38:05.938984 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:38:05.939006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:38:05.939028 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:38:05.939050 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:38:05.939072 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:38:05.939097 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:38:05.939118 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:38:05.939138 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:38:05.939154 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:38:05.939169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:38:05.939184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:05.939200 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:38:05.939222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:38:05.939239 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:38:05.939256 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:38:05.940903 systemd-journald[188]: Collecting audit messages is disabled.
Mar 13 00:38:05.940960 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:38:05.940980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:38:05.940998 systemd-journald[188]: Journal started
Mar 13 00:38:05.941033 systemd-journald[188]: Runtime Journal (/run/log/journal/ec2839e4588fb250024b8ab2431cf0d6) is 4.7M, max 38.1M, 33.3M free.
Mar 13 00:38:05.904483 systemd-modules-load[189]: Inserted module 'overlay'
Mar 13 00:38:05.950434 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:38:05.953457 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:38:05.957543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:05.960293 kernel: Bridge firewalling registered
Mar 13 00:38:05.958130 systemd-modules-load[189]: Inserted module 'br_netfilter'
Mar 13 00:38:05.960058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:38:05.965592 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:38:05.968561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:38:05.975546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:38:05.980384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:38:05.990254 systemd-tmpfiles[210]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:38:05.998354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:38:06.005500 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 13 00:38:06.004444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:38:06.008604 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:38:06.010517 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:38:06.013651 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:38:06.037736 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:38:06.068378 systemd-resolved[226]: Positive Trust Anchors:
Mar 13 00:38:06.069507 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:38:06.069573 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:38:06.076682 systemd-resolved[226]: Defaulting to hostname 'linux'.
Mar 13 00:38:06.078128 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:38:06.080584 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:38:06.135450 kernel: SCSI subsystem initialized
Mar 13 00:38:06.145438 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:38:06.156443 kernel: iscsi: registered transport (tcp)
Mar 13 00:38:06.178450 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:38:06.178542 kernel: QLogic iSCSI HBA Driver
Mar 13 00:38:06.197255 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:38:06.215792 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:38:06.217046 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:38:06.264035 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:38:06.266155 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:38:06.323440 kernel: raid6: avx512x4 gen() 17780 MB/s
Mar 13 00:38:06.341439 kernel: raid6: avx512x2 gen() 17821 MB/s
Mar 13 00:38:06.359437 kernel: raid6: avx512x1 gen() 17682 MB/s
Mar 13 00:38:06.377434 kernel: raid6: avx2x4 gen() 17726 MB/s
Mar 13 00:38:06.395433 kernel: raid6: avx2x2 gen() 17660 MB/s
Mar 13 00:38:06.413689 kernel: raid6: avx2x1 gen() 13712 MB/s
Mar 13 00:38:06.413743 kernel: raid6: using algorithm avx512x2 gen() 17821 MB/s
Mar 13 00:38:06.432912 kernel: raid6: .... xor() 24740 MB/s, rmw enabled
Mar 13 00:38:06.432966 kernel: raid6: using avx512x2 recovery algorithm
Mar 13 00:38:06.453443 kernel: xor: automatically using best checksumming function avx
Mar 13 00:38:06.621444 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:38:06.628239 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:38:06.630569 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:38:06.658152 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Mar 13 00:38:06.664942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:38:06.669203 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:38:06.691630 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Mar 13 00:38:06.717657 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:38:06.720599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:38:06.782116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:38:06.786803 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:38:06.884447 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 13 00:38:06.886435 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 13 00:38:06.895427 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:38:06.911145 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 13 00:38:06.911428 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 13 00:38:06.911628 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 13 00:38:06.914742 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:38:06.914800 kernel: GPT:9289727 != 33554431
Mar 13 00:38:06.914820 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 13 00:38:06.915057 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:38:06.918164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:38:06.927920 kernel: GPT:9289727 != 33554431
Mar 13 00:38:06.927957 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:38:06.927988 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:38:06.928008 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:91:7d:ef:20:3d
Mar 13 00:38:06.918506 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:06.930245 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:06.933758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:06.935897 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:38:06.946838 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:38:06.948359 (udev-worker)[493]: Network interface NamePolicy= disabled on kernel command line.
Mar 13 00:38:06.968437 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:38:06.992253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:07.005444 kernel: nvme nvme0: using unchecked data buffer
Mar 13 00:38:07.095110 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 13 00:38:07.148021 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 13 00:38:07.160850 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:38:07.180112 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 13 00:38:07.189274 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 13 00:38:07.189913 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 13 00:38:07.191200 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:38:07.192307 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:38:07.193531 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:38:07.195250 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:38:07.200554 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:38:07.216634 disk-uuid[671]: Primary Header is updated.
Mar 13 00:38:07.216634 disk-uuid[671]: Secondary Entries is updated.
Mar 13 00:38:07.216634 disk-uuid[671]: Secondary Header is updated.
Mar 13 00:38:07.224083 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:38:07.227901 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:38:07.233438 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:38:08.236462 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 13 00:38:08.237379 disk-uuid[674]: The operation has completed successfully.
Mar 13 00:38:08.373983 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:38:08.374117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:38:08.411877 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:38:08.427061 sh[939]: Success
Mar 13 00:38:08.452469 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:38:08.452546 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:38:08.453820 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:38:08.466435 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Mar 13 00:38:08.564996 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:38:08.569504 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:38:08.580120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:38:08.599439 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (962)
Mar 13 00:38:08.604113 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:38:08.604183 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:38:08.737656 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 13 00:38:08.737733 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:38:08.740089 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:38:08.765121 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:38:08.766218 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:38:08.767237 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:38:08.768274 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:38:08.770545 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:38:08.810577 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (999)
Mar 13 00:38:08.816672 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:08.816772 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:38:08.837669 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:38:08.837748 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:38:08.845479 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:08.847097 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:38:08.849565 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:38:08.876950 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:38:08.879799 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:38:08.922400 systemd-networkd[1131]: lo: Link UP
Mar 13 00:38:08.922422 systemd-networkd[1131]: lo: Gained carrier
Mar 13 00:38:08.924085 systemd-networkd[1131]: Enumeration completed
Mar 13 00:38:08.924533 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:38:08.924651 systemd-networkd[1131]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:38:08.924656 systemd-networkd[1131]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:38:08.926927 systemd[1]: Reached target network.target - Network.
Mar 13 00:38:08.928834 systemd-networkd[1131]: eth0: Link UP
Mar 13 00:38:08.928840 systemd-networkd[1131]: eth0: Gained carrier
Mar 13 00:38:08.928857 systemd-networkd[1131]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:38:08.938513 systemd-networkd[1131]: eth0: DHCPv4 address 172.31.22.244/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 13 00:38:09.344781 ignition[1098]: Ignition 2.22.0
Mar 13 00:38:09.344799 ignition[1098]: Stage: fetch-offline
Mar 13 00:38:09.345033 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:09.345044 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:38:09.345332 ignition[1098]: Ignition finished successfully
Mar 13 00:38:09.348528 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:38:09.350129 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:38:09.384279 ignition[1140]: Ignition 2.22.0
Mar 13 00:38:09.384296 ignition[1140]: Stage: fetch
Mar 13 00:38:09.384680 ignition[1140]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:09.384692 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:38:09.385320 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:38:09.403211 ignition[1140]: PUT result: OK
Mar 13 00:38:09.405956 ignition[1140]: parsed url from cmdline: ""
Mar 13 00:38:09.405968 ignition[1140]: no config URL provided
Mar 13 00:38:09.405978 ignition[1140]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:38:09.405993 ignition[1140]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:38:09.406022 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:38:09.406686 ignition[1140]: PUT result: OK
Mar 13 00:38:09.406764 ignition[1140]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 13 00:38:09.411884 ignition[1140]: GET result: OK
Mar 13 00:38:09.412032 ignition[1140]: parsing config with SHA512: 50c8c6d271de4d907b4a8c21343b1131a68c0351c004cf65abcf9136cc47b411c8035d83fd87ad788cf0e2b7243b2f15d6b19ff413b7fe773eccecc821187516
Mar 13 00:38:09.419517 unknown[1140]: fetched base config from "system"
Mar 13 00:38:09.419532 unknown[1140]: fetched base config from "system"
Mar 13 00:38:09.420031 ignition[1140]: fetch: fetch complete
Mar 13 00:38:09.419538 unknown[1140]: fetched user config from "aws"
Mar 13 00:38:09.420038 ignition[1140]: fetch: fetch passed
Mar 13 00:38:09.420117 ignition[1140]: Ignition finished successfully
Mar 13 00:38:09.423015 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:38:09.425373 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:38:09.459603 ignition[1147]: Ignition 2.22.0
Mar 13 00:38:09.459887 ignition[1147]: Stage: kargs
Mar 13 00:38:09.460303 ignition[1147]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:09.460316 ignition[1147]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:38:09.461015 ignition[1147]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:38:09.461859 ignition[1147]: PUT result: OK
Mar 13 00:38:09.464240 ignition[1147]: kargs: kargs passed
Mar 13 00:38:09.464308 ignition[1147]: Ignition finished successfully
Mar 13 00:38:09.466496 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:38:09.467930 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:38:09.497432 ignition[1154]: Ignition 2.22.0
Mar 13 00:38:09.497448 ignition[1154]: Stage: disks
Mar 13 00:38:09.497732 ignition[1154]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:09.497740 ignition[1154]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:38:09.497813 ignition[1154]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:38:09.498678 ignition[1154]: PUT result: OK
Mar 13 00:38:09.501693 ignition[1154]: disks: disks passed
Mar 13 00:38:09.501768 ignition[1154]: Ignition finished successfully
Mar 13 00:38:09.503605 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:38:09.504188 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:38:09.504540 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:38:09.505130 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:38:09.505638 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:38:09.506145 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:38:09.507750 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:38:09.556331 systemd-fsck[1162]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:38:09.560324 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:38:09.562234 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:38:09.713443 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:38:09.714304 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:38:09.715449 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:38:09.717696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:38:09.720026 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:38:09.723003 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:38:09.723962 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:38:09.723988 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:38:09.732216 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:38:09.734353 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:38:09.752436 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1181)
Mar 13 00:38:09.756661 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:09.756790 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:38:09.765682 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:38:09.765761 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:38:09.769124 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:38:10.107070 initrd-setup-root[1205]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:38:10.125826 initrd-setup-root[1212]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:38:10.131040 initrd-setup-root[1219]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:38:10.135644 initrd-setup-root[1226]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:38:10.399042 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:38:10.401203 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:38:10.403569 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:38:10.418064 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:38:10.421572 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:10.451124 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:38:10.458588 ignition[1294]: INFO : Ignition 2.22.0
Mar 13 00:38:10.458588 ignition[1294]: INFO : Stage: mount
Mar 13 00:38:10.460108 ignition[1294]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:10.460108 ignition[1294]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:38:10.460108 ignition[1294]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:38:10.460108 ignition[1294]: INFO : PUT result: OK
Mar 13 00:38:10.464199 ignition[1294]: INFO : mount: mount passed
Mar 13 00:38:10.464971 ignition[1294]: INFO : Ignition finished successfully
Mar 13 00:38:10.466589 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:38:10.468044 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:38:10.561690 systemd-networkd[1131]: eth0: Gained IPv6LL
Mar 13 00:38:10.716064 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:38:10.745442 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1305)
Mar 13 00:38:10.749392 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:10.749483 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:38:10.757151 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 13 00:38:10.757231 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 13 00:38:10.759328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:38:10.797607 ignition[1321]: INFO : Ignition 2.22.0
Mar 13 00:38:10.797607 ignition[1321]: INFO : Stage: files
Mar 13 00:38:10.799116 ignition[1321]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:10.799116 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 13 00:38:10.799116 ignition[1321]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 13 00:38:10.800613 ignition[1321]: INFO : PUT result: OK
Mar 13 00:38:10.804264 ignition[1321]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:38:10.818885 ignition[1321]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:38:10.818885 ignition[1321]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:38:10.833430 ignition[1321]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:38:10.834501 ignition[1321]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:38:10.834501 ignition[1321]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:38:10.834087 unknown[1321]: wrote ssh authorized keys file for user: core
Mar 13 00:38:10.837429 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:38:10.838275 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:38:10.914019 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:38:11.114822 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:38:11.114822 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:38:11.117259 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:38:11.117259 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:38:11.117259 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:38:11.117259 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:38:11.117259 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:38:11.117259 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:38:11.117259 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:38:11.122639 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:38:11.122639 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:38:11.122639 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:38:11.125624 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:38:11.125624 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:38:11.125624 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 13 00:38:11.585825 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 00:38:12.302831 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:38:12.302831 ignition[1321]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 00:38:12.305482 ignition[1321]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:38:12.309124 ignition[1321]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:38:12.309124 ignition[1321]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 00:38:12.309124 ignition[1321]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:38:12.315153 ignition[1321]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:38:12.315153 ignition[1321]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:38:12.315153 ignition[1321]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:38:12.315153 ignition[1321]: INFO : files: files passed
Mar 13 00:38:12.315153 ignition[1321]: INFO : Ignition finished successfully
Mar 13 00:38:12.311811 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:38:12.314589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:38:12.321135 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 13 00:38:12.330458 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 00:38:12.330563 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 00:38:12.349627 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:38:12.349627 initrd-setup-root-after-ignition[1351]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:38:12.352948 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:38:12.355102 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:38:12.355801 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 00:38:12.357854 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 00:38:12.411540 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 00:38:12.411689 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 00:38:12.413349 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 00:38:12.414207 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 13 00:38:12.415056 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 00:38:12.416188 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 00:38:12.441544 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:38:12.443743 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 00:38:12.463750 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Mar 13 00:38:12.464635 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:38:12.465572 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 00:38:12.466448 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 00:38:12.466682 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:38:12.467750 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 00:38:12.468586 systemd[1]: Stopped target basic.target - Basic System. Mar 13 00:38:12.469497 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 13 00:38:12.470289 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:38:12.471057 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 00:38:12.471833 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 13 00:38:12.472598 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 00:38:12.473454 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:38:12.474227 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 00:38:12.475460 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 00:38:12.476236 systemd[1]: Stopped target swap.target - Swaps. Mar 13 00:38:12.477024 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 00:38:12.477207 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:38:12.478307 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:38:12.479138 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:38:12.479813 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 13 00:38:12.479963 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 13 00:38:12.480620 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 00:38:12.480885 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 00:38:12.482215 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 00:38:12.482471 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:38:12.483172 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 00:38:12.483366 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 00:38:12.485529 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 00:38:12.490580 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 00:38:12.491081 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 00:38:12.491315 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:38:12.493148 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 00:38:12.493352 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:38:12.505325 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 00:38:12.508535 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 13 00:38:12.523746 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 13 00:38:12.531232 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 13 00:38:12.532112 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Mar 13 00:38:12.534197 ignition[1375]: INFO : Ignition 2.22.0 Mar 13 00:38:12.534197 ignition[1375]: INFO : Stage: umount Mar 13 00:38:12.534197 ignition[1375]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:38:12.534197 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 13 00:38:12.534197 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 13 00:38:12.537060 ignition[1375]: INFO : PUT result: OK Mar 13 00:38:12.537610 ignition[1375]: INFO : umount: umount passed Mar 13 00:38:12.537610 ignition[1375]: INFO : Ignition finished successfully Mar 13 00:38:12.539217 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 00:38:12.539362 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 13 00:38:12.540807 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 00:38:12.540890 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 00:38:12.541583 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 00:38:12.541645 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 00:38:12.542173 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 13 00:38:12.542232 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 13 00:38:12.542821 systemd[1]: Stopped target network.target - Network. Mar 13 00:38:12.543372 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 00:38:12.543506 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:38:12.544036 systemd[1]: Stopped target paths.target - Path Units. Mar 13 00:38:12.544591 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 00:38:12.548483 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:38:12.548948 systemd[1]: Stopped target slices.target - Slice Units. 
Mar 13 00:38:12.549869 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 00:38:12.550529 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 00:38:12.550585 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:38:12.551114 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 00:38:12.551162 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:38:12.551725 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 13 00:38:12.551800 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 13 00:38:12.552375 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 00:38:12.552453 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 00:38:12.553105 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 00:38:12.553168 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 00:38:12.553956 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 00:38:12.554594 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 00:38:12.559943 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 00:38:12.560084 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 13 00:38:12.564275 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 13 00:38:12.564875 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 13 00:38:12.565022 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 13 00:38:12.567124 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 13 00:38:12.568260 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 13 00:38:12.568954 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Mar 13 00:38:12.569006 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:38:12.570599 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 13 00:38:12.571706 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 13 00:38:12.571771 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:38:12.572575 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:38:12.572630 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:38:12.574669 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 13 00:38:12.574723 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 13 00:38:12.577463 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 00:38:12.577526 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:38:12.578558 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:38:12.584571 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:38:12.584693 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:38:12.594106 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 13 00:38:12.595781 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:38:12.598120 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 13 00:38:12.598178 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 13 00:38:12.598729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 13 00:38:12.598771 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:38:12.599586 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Mar 13 00:38:12.599650 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:38:12.600667 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 13 00:38:12.600843 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 13 00:38:12.601870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 00:38:12.601942 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:38:12.604008 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 13 00:38:12.604479 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 13 00:38:12.604540 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:38:12.607546 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 13 00:38:12.607607 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:38:12.610092 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 13 00:38:12.610136 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 00:38:12.610626 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 00:38:12.610678 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:38:12.611827 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:38:12.611886 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:38:12.616330 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 13 00:38:12.616399 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
Mar 13 00:38:12.616468 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 13 00:38:12.616521 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:38:12.617016 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 13 00:38:12.617142 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 13 00:38:12.622769 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 13 00:38:12.622902 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 13 00:38:12.623867 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 13 00:38:12.626106 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 13 00:38:12.646348 systemd[1]: Switching root. Mar 13 00:38:12.708109 systemd-journald[188]: Journal stopped Mar 13 00:38:14.581925 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Mar 13 00:38:14.582027 kernel: SELinux: policy capability network_peer_controls=1 Mar 13 00:38:14.582054 kernel: SELinux: policy capability open_perms=1 Mar 13 00:38:14.582075 kernel: SELinux: policy capability extended_socket_class=1 Mar 13 00:38:14.582096 kernel: SELinux: policy capability always_check_network=0 Mar 13 00:38:14.582117 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 13 00:38:14.582143 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 13 00:38:14.582164 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 13 00:38:14.582181 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 13 00:38:14.582201 kernel: SELinux: policy capability userspace_initial_context=0 Mar 13 00:38:14.582220 kernel: audit: type=1403 audit(1773362293.146:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 13 00:38:14.582242 systemd[1]: Successfully loaded SELinux policy in 75.849ms. 
Mar 13 00:38:14.582284 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.283ms. Mar 13 00:38:14.582308 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:38:14.582334 systemd[1]: Detected virtualization amazon. Mar 13 00:38:14.582355 systemd[1]: Detected architecture x86-64. Mar 13 00:38:14.582376 systemd[1]: Detected first boot. Mar 13 00:38:14.582397 systemd[1]: Initializing machine ID from VM UUID. Mar 13 00:38:14.582458 zram_generator::config[1419]: No configuration found. Mar 13 00:38:14.582480 kernel: Guest personality initialized and is inactive Mar 13 00:38:14.582498 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 13 00:38:14.582518 kernel: Initialized host personality Mar 13 00:38:14.582535 kernel: NET: Registered PF_VSOCK protocol family Mar 13 00:38:14.582557 systemd[1]: Populated /etc with preset unit settings. Mar 13 00:38:14.582578 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 13 00:38:14.582599 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 13 00:38:14.582625 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 13 00:38:14.582653 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 13 00:38:14.582675 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 13 00:38:14.582699 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 13 00:38:14.582722 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 13 00:38:14.582754 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Mar 13 00:38:14.582781 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 13 00:38:14.582802 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 13 00:38:14.582826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 13 00:38:14.582851 systemd[1]: Created slice user.slice - User and Session Slice. Mar 13 00:38:14.582875 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:38:14.582902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:38:14.582927 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 13 00:38:14.582950 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 13 00:38:14.582981 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 13 00:38:14.583008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:38:14.583033 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 13 00:38:14.583059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:38:14.583083 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:38:14.583109 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 13 00:38:14.583133 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 13 00:38:14.583158 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 13 00:38:14.583188 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 13 00:38:14.583211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 13 00:38:14.583238 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:38:14.583262 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:38:14.583286 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:38:14.583312 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 13 00:38:14.583336 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 13 00:38:14.583362 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 13 00:38:14.583386 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:38:14.601174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:38:14.601217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:38:14.601240 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 13 00:38:14.601263 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 13 00:38:14.601286 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 13 00:38:14.601308 systemd[1]: Mounting media.mount - External Media Directory... Mar 13 00:38:14.601331 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:38:14.601353 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 13 00:38:14.601376 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 13 00:38:14.601404 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 13 00:38:14.601458 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 13 00:38:14.601481 systemd[1]: Reached target machines.target - Containers. 
Mar 13 00:38:14.601505 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 13 00:38:14.601528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:38:14.601550 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:38:14.601573 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 13 00:38:14.601595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:38:14.601621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:38:14.601644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:38:14.601666 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 13 00:38:14.601688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:38:14.601710 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 13 00:38:14.601732 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 13 00:38:14.601755 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 13 00:38:14.601777 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 13 00:38:14.601801 systemd[1]: Stopped systemd-fsck-usr.service. Mar 13 00:38:14.601826 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:38:14.601849 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:38:14.601872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Mar 13 00:38:14.601895 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:38:14.601917 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 13 00:38:14.601940 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 13 00:38:14.601966 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:38:14.601991 kernel: loop: module loaded Mar 13 00:38:14.602013 systemd[1]: verity-setup.service: Deactivated successfully. Mar 13 00:38:14.602036 systemd[1]: Stopped verity-setup.service. Mar 13 00:38:14.602062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:38:14.602086 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 13 00:38:14.602108 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 13 00:38:14.602131 systemd[1]: Mounted media.mount - External Media Directory. Mar 13 00:38:14.602153 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 13 00:38:14.602179 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 13 00:38:14.602201 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 13 00:38:14.602223 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:38:14.602246 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 13 00:38:14.602272 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 13 00:38:14.602295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:38:14.602317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:38:14.602341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 13 00:38:14.602363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:38:14.602386 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:38:14.608454 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:38:14.608524 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 13 00:38:14.608560 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:38:14.608587 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 13 00:38:14.608612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:38:14.608638 kernel: fuse: init (API version 7.41) Mar 13 00:38:14.608674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 13 00:38:14.608701 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 13 00:38:14.608728 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 13 00:38:14.608753 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:38:14.608779 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 13 00:38:14.608810 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 13 00:38:14.608837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:38:14.608909 systemd-journald[1502]: Collecting audit messages is disabled. Mar 13 00:38:14.608962 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Mar 13 00:38:14.608990 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:38:14.609016 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 13 00:38:14.609041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:38:14.609068 systemd-journald[1502]: Journal started Mar 13 00:38:14.609118 systemd-journald[1502]: Runtime Journal (/run/log/journal/ec2839e4588fb250024b8ab2431cf0d6) is 4.7M, max 38.1M, 33.3M free. Mar 13 00:38:14.141289 systemd[1]: Queued start job for default target multi-user.target. Mar 13 00:38:14.614906 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 13 00:38:14.156926 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 13 00:38:14.157381 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 13 00:38:14.625298 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 00:38:14.625062 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 13 00:38:14.625533 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 13 00:38:14.631660 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:38:14.633346 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 13 00:38:14.666830 kernel: ACPI: bus type drm_connector registered Mar 13 00:38:14.659932 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:38:14.662158 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:38:14.666175 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:38:14.670644 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Mar 13 00:38:14.675646 systemd-tmpfiles[1519]: ACLs are not supported, ignoring. Mar 13 00:38:14.675676 systemd-tmpfiles[1519]: ACLs are not supported, ignoring. Mar 13 00:38:14.681928 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 13 00:38:14.690741 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 13 00:38:14.698163 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 13 00:38:14.716750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 00:38:14.727459 kernel: loop0: detected capacity change from 0 to 110984 Mar 13 00:38:14.751153 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 13 00:38:14.765389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:38:14.769765 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 13 00:38:14.771854 systemd-journald[1502]: Time spent on flushing to /var/log/journal/ec2839e4588fb250024b8ab2431cf0d6 is 51.486ms for 1026 entries. Mar 13 00:38:14.771854 systemd-journald[1502]: System Journal (/var/log/journal/ec2839e4588fb250024b8ab2431cf0d6) is 8M, max 195.6M, 187.6M free. Mar 13 00:38:14.840357 systemd-journald[1502]: Received client request to flush runtime journal. Mar 13 00:38:14.778711 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 13 00:38:14.809854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:38:14.843900 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 13 00:38:14.872432 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 13 00:38:14.879051 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 13 00:38:14.885582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 13 00:38:14.901539 kernel: loop1: detected capacity change from 0 to 72368
Mar 13 00:38:14.917527 systemd-tmpfiles[1572]: ACLs are not supported, ignoring.
Mar 13 00:38:14.918846 systemd-tmpfiles[1572]: ACLs are not supported, ignoring.
Mar 13 00:38:14.927428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:38:15.041447 kernel: loop2: detected capacity change from 0 to 228704
Mar 13 00:38:15.159249 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:38:15.162523 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:38:15.184363 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:38:15.201433 kernel: loop3: detected capacity change from 0 to 128560
Mar 13 00:38:15.351105 kernel: loop4: detected capacity change from 0 to 110984
Mar 13 00:38:15.376439 kernel: loop5: detected capacity change from 0 to 72368
Mar 13 00:38:15.399847 kernel: loop6: detected capacity change from 0 to 228704
Mar 13 00:38:15.430438 kernel: loop7: detected capacity change from 0 to 128560
Mar 13 00:38:15.443852 (sd-merge)[1583]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 13 00:38:15.444521 (sd-merge)[1583]: Merged extensions into '/usr'.
Mar 13 00:38:15.454103 systemd[1]: Reload requested from client PID 1528 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:38:15.454275 systemd[1]: Reloading...
Mar 13 00:38:15.583442 zram_generator::config[1607]: No configuration found.
Mar 13 00:38:15.840075 systemd[1]: Reloading finished in 385 ms.
Mar 13 00:38:15.866254 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:38:15.867196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:38:15.874582 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:38:15.879587 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:38:15.886572 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:38:15.904563 systemd[1]: Reload requested from client PID 1661 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:38:15.904585 systemd[1]: Reloading...
Mar 13 00:38:15.921588 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:38:15.922373 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:38:15.922909 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:38:15.923892 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:38:15.927931 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:38:15.928945 systemd-tmpfiles[1662]: ACLs are not supported, ignoring.
Mar 13 00:38:15.929874 systemd-tmpfiles[1662]: ACLs are not supported, ignoring.
Mar 13 00:38:15.943998 systemd-tmpfiles[1662]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:38:15.944013 systemd-tmpfiles[1662]: Skipping /boot
Mar 13 00:38:15.966009 systemd-tmpfiles[1662]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:38:15.966025 systemd-tmpfiles[1662]: Skipping /boot
Mar 13 00:38:16.004441 zram_generator::config[1690]: No configuration found.
Mar 13 00:38:16.015945 systemd-udevd[1663]: Using default interface naming scheme 'v255'.
Mar 13 00:38:16.090558 ldconfig[1520]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:38:16.314963 (udev-worker)[1710]: Network interface NamePolicy= disabled on kernel command line.
Mar 13 00:38:16.432436 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 13 00:38:16.449432 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:38:16.456434 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:38:16.462131 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Mar 13 00:38:16.470434 kernel: ACPI: button: Sleep Button [SLPF]
Mar 13 00:38:16.514432 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 13 00:38:16.514088 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:38:16.515311 systemd[1]: Reloading finished in 609 ms.
Mar 13 00:38:16.526900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:38:16.530006 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:38:16.531106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:38:16.556519 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:38:16.563579 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:38:16.567638 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:38:16.572022 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:38:16.579137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:38:16.581683 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:38:16.590333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:16.591820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:38:16.593798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:38:16.596877 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:38:16.602800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:38:16.603573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:38:16.603733 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:38:16.603867 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:16.610023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:16.610308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:38:16.610550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:38:16.610680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:38:16.610812 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:16.619062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:16.619634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:38:16.625910 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:38:16.626886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:38:16.627201 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:38:16.627939 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:38:16.628657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:16.637376 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:38:16.642657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:38:16.643860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:38:16.646993 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:38:16.649784 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:38:16.650029 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:38:16.653932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:38:16.654182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:38:16.658489 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:38:16.675386 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:38:16.676512 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:38:16.700159 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:38:16.717553 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:38:16.729499 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:38:16.735895 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:38:16.783357 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:38:16.818454 augenrules[1857]: No rules
Mar 13 00:38:16.819847 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:38:16.820141 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:38:16.840134 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:38:16.841261 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:38:16.841974 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:38:16.977778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:16.990243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:38:16.990526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:16.998853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:17.054482 systemd-networkd[1795]: lo: Link UP
Mar 13 00:38:17.054495 systemd-networkd[1795]: lo: Gained carrier
Mar 13 00:38:17.056699 systemd-networkd[1795]: Enumeration completed
Mar 13 00:38:17.057179 systemd-networkd[1795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:38:17.057191 systemd-networkd[1795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:38:17.060216 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:38:17.061021 systemd-networkd[1795]: eth0: Link UP
Mar 13 00:38:17.066486 systemd-networkd[1795]: eth0: Gained carrier
Mar 13 00:38:17.066525 systemd-networkd[1795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:38:17.068213 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:38:17.073316 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:38:17.078526 systemd-networkd[1795]: eth0: DHCPv4 address 172.31.22.244/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 13 00:38:17.097664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 13 00:38:17.101079 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:38:17.166311 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:38:17.171928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:38:17.179082 systemd-resolved[1796]: Positive Trust Anchors:
Mar 13 00:38:17.179100 systemd-resolved[1796]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:38:17.179150 systemd-resolved[1796]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:38:17.185592 systemd-resolved[1796]: Defaulting to hostname 'linux'.
Mar 13 00:38:17.188595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:17.189315 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:38:17.189915 systemd[1]: Reached target network.target - Network.
Mar 13 00:38:17.190305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:38:17.190702 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:38:17.191151 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:38:17.191580 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:38:17.191932 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:38:17.192437 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:38:17.192857 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:38:17.193210 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:38:17.193759 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:38:17.193797 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:38:17.194158 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:38:17.195813 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:38:17.197455 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:38:17.199937 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:38:17.200486 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:38:17.200886 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:38:17.203356 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:38:17.204093 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:38:17.205192 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:38:17.206424 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:38:17.206813 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:38:17.207237 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:38:17.207277 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:38:17.208339 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:38:17.211602 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 00:38:17.219792 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:38:17.225069 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:38:17.227055 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:38:17.231147 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:38:17.231757 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:38:17.235649 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:38:17.239581 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:38:17.245997 systemd[1]: Started ntpd.service - Network Time Service.
Mar 13 00:38:17.253087 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:38:17.270259 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 13 00:38:17.281120 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:38:17.285757 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:38:17.297339 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:38:17.300548 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:38:17.303173 jq[1946]: false
Mar 13 00:38:17.304825 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:38:17.311695 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:38:17.316826 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:38:17.321438 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Refreshing passwd entry cache
Mar 13 00:38:17.320424 oslogin_cache_refresh[1948]: Refreshing passwd entry cache
Mar 13 00:38:17.327089 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:38:17.328865 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:38:17.329138 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:38:17.334746 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Failure getting users, quitting
Mar 13 00:38:17.334837 oslogin_cache_refresh[1948]: Failure getting users, quitting
Mar 13 00:38:17.334924 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:38:17.334969 oslogin_cache_refresh[1948]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:38:17.335099 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Refreshing group entry cache
Mar 13 00:38:17.335147 oslogin_cache_refresh[1948]: Refreshing group entry cache
Mar 13 00:38:17.340074 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Failure getting groups, quitting
Mar 13 00:38:17.340074 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:38:17.339983 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:38:17.338469 oslogin_cache_refresh[1948]: Failure getting groups, quitting
Mar 13 00:38:17.338484 oslogin_cache_refresh[1948]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:38:17.341510 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:38:17.369959 extend-filesystems[1947]: Found /dev/nvme0n1p6
Mar 13 00:38:17.375363 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:38:17.378792 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:38:17.393597 extend-filesystems[1947]: Found /dev/nvme0n1p9
Mar 13 00:38:17.394460 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:38:17.394746 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:38:17.406465 update_engine[1961]: I20260313 00:38:17.406355 1961 main.cc:92] Flatcar Update Engine starting
Mar 13 00:38:17.418902 extend-filesystems[1947]: Checking size of /dev/nvme0n1p9
Mar 13 00:38:17.424488 tar[1967]: linux-amd64/LICENSE
Mar 13 00:38:17.424488 tar[1967]: linux-amd64/helm
Mar 13 00:38:17.429875 (ntainerd)[1987]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:38:17.435633 jq[1962]: true
Mar 13 00:38:17.493670 extend-filesystems[1947]: Resized partition /dev/nvme0n1p9
Mar 13 00:38:17.505230 ntpd[1950]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:38:17.508217 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:38:17.508217 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:38:17.508217 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: ----------------------------------------------------
Mar 13 00:38:17.508217 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:38:17.508217 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:38:17.508217 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: corporation. Support and training for ntp-4 are
Mar 13 00:38:17.508217 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: available at https://www.nwtime.org/support
Mar 13 00:38:17.508217 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: ----------------------------------------------------
Mar 13 00:38:17.505296 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:38:17.505307 ntpd[1950]: ----------------------------------------------------
Mar 13 00:38:17.505317 ntpd[1950]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:38:17.505326 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:38:17.505335 ntpd[1950]: corporation. Support and training for ntp-4 are
Mar 13 00:38:17.505345 ntpd[1950]: available at https://www.nwtime.org/support
Mar 13 00:38:17.505353 ntpd[1950]: ----------------------------------------------------
Mar 13 00:38:17.513464 extend-filesystems[2000]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:38:17.518277 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: proto: precision = 0.062 usec (-24)
Mar 13 00:38:17.518277 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: basedate set to 2026-02-28
Mar 13 00:38:17.518277 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:38:17.518277 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:38:17.518277 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:38:17.525842 kernel: ntpd[1950]: segfault at 24 ip 0000555b480f8aeb sp 00007ffe873b7d50 error 4 in ntpd[68aeb,555b48096000+80000] likely on CPU 1 (core 0, socket 0)
Mar 13 00:38:17.525885 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Mar 13 00:38:17.525909 coreos-metadata[1943]: Mar 13 00:38:17.514 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 13 00:38:17.525909 coreos-metadata[1943]: Mar 13 00:38:17.518 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 13 00:38:17.517024 ntpd[1950]: proto: precision = 0.062 usec (-24)
Mar 13 00:38:17.526479 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:38:17.526479 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: Listen normally on 3 eth0 172.31.22.244:123
Mar 13 00:38:17.526479 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: Listen normally on 4 lo [::1]:123
Mar 13 00:38:17.526479 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: bind(21) AF_INET6 [fe80::491:7dff:feef:203d%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 13 00:38:17.526479 ntpd[1950]: 13 Mar 00:38:17 ntpd[1950]: unable to create socket on eth0 (5) for [fe80::491:7dff:feef:203d%2]:123
Mar 13 00:38:17.526649 coreos-metadata[1943]: Mar 13 00:38:17.525 INFO Fetch successful
Mar 13 00:38:17.526649 coreos-metadata[1943]: Mar 13 00:38:17.525 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 13 00:38:17.517896 ntpd[1950]: basedate set to 2026-02-28
Mar 13 00:38:17.517914 ntpd[1950]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:38:17.518036 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:38:17.518064 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:38:17.519020 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:38:17.519054 ntpd[1950]: Listen normally on 3 eth0 172.31.22.244:123
Mar 13 00:38:17.519094 ntpd[1950]: Listen normally on 4 lo [::1]:123
Mar 13 00:38:17.519133 ntpd[1950]: bind(21) AF_INET6 [fe80::491:7dff:feef:203d%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 13 00:38:17.519156 ntpd[1950]: unable to create socket on eth0 (5) for [fe80::491:7dff:feef:203d%2]:123
Mar 13 00:38:17.527684 dbus-daemon[1944]: [system] SELinux support is enabled
Mar 13 00:38:17.530456 coreos-metadata[1943]: Mar 13 00:38:17.528 INFO Fetch successful
Mar 13 00:38:17.530456 coreos-metadata[1943]: Mar 13 00:38:17.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 13 00:38:17.527982 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:38:17.533813 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 13 00:38:17.533897 coreos-metadata[1943]: Mar 13 00:38:17.531 INFO Fetch successful
Mar 13 00:38:17.533897 coreos-metadata[1943]: Mar 13 00:38:17.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 13 00:38:17.535438 jq[1993]: true
Mar 13 00:38:17.536978 coreos-metadata[1943]: Mar 13 00:38:17.536 INFO Fetch successful
Mar 13 00:38:17.537070 coreos-metadata[1943]: Mar 13 00:38:17.536 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 13 00:38:17.543219 coreos-metadata[1943]: Mar 13 00:38:17.543 INFO Fetch failed with 404: resource not found
Mar 13 00:38:17.543219 coreos-metadata[1943]: Mar 13 00:38:17.543 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 13 00:38:17.553501 coreos-metadata[1943]: Mar 13 00:38:17.545 INFO Fetch successful
Mar 13 00:38:17.553501 coreos-metadata[1943]: Mar 13 00:38:17.545 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 13 00:38:17.553501 coreos-metadata[1943]: Mar 13 00:38:17.546 INFO Fetch successful
Mar 13 00:38:17.553501 coreos-metadata[1943]: Mar 13 00:38:17.546 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 13 00:38:17.553501 coreos-metadata[1943]: Mar 13 00:38:17.548 INFO Fetch successful
Mar 13 00:38:17.553501 coreos-metadata[1943]: Mar 13 00:38:17.548 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 13 00:38:17.553501 coreos-metadata[1943]: Mar 13 00:38:17.550 INFO Fetch successful
Mar 13 00:38:17.553501 coreos-metadata[1943]: Mar 13 00:38:17.550 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 13 00:38:17.549347 dbus-daemon[1944]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1795 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 13 00:38:17.554802 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 13 00:38:17.556453 coreos-metadata[1943]: Mar 13 00:38:17.555 INFO Fetch successful
Mar 13 00:38:17.557879 systemd-coredump[2003]: Process 1950 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 13 00:38:17.564300 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Mar 13 00:38:17.566052 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:38:17.566091 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:38:17.566827 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:38:17.566859 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:38:17.571796 systemd[1]: Started systemd-coredump@0-2003-0.service - Process Core Dump (PID 2003/UID 0).
Mar 13 00:38:17.580564 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 13 00:38:17.582266 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:38:17.583454 update_engine[1961]: I20260313 00:38:17.583380 1961 update_check_scheduler.cc:74] Next update check in 9m23s
Mar 13 00:38:17.602676 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 13 00:38:17.609225 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:38:17.649864 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 13 00:38:17.650792 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 13 00:38:17.664306 systemd-logind[1959]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 13 00:38:17.664586 systemd-logind[1959]: Watching system buttons on /dev/input/event3 (Sleep Button)
Mar 13 00:38:17.664614 systemd-logind[1959]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:38:17.669284 systemd-logind[1959]: New seat seat0.
Mar 13 00:38:17.673997 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:38:17.678439 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 13 00:38:17.692933 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 13 00:38:17.692933 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 13 00:38:17.692933 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 13 00:38:17.696334 extend-filesystems[1947]: Resized filesystem in /dev/nvme0n1p9
Mar 13 00:38:17.696649 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 00:38:17.696966 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 00:38:17.739667 bash[2033]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:38:17.734922 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:38:17.746061 systemd[1]: Starting sshkeys.service...
Mar 13 00:38:17.772684 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 13 00:38:17.775355 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 13 00:38:18.070292 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 13 00:38:18.074528 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 13 00:38:18.075203 dbus-daemon[1944]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2011 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 13 00:38:18.088826 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 13 00:38:18.179171 coreos-metadata[2063]: Mar 13 00:38:18.179 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 13 00:38:18.182929 coreos-metadata[2063]: Mar 13 00:38:18.182 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 13 00:38:18.187430 coreos-metadata[2063]: Mar 13 00:38:18.185 INFO Fetch successful
Mar 13 00:38:18.187430 coreos-metadata[2063]: Mar 13 00:38:18.185 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 13 00:38:18.193265 coreos-metadata[2063]: Mar 13 00:38:18.193 INFO Fetch successful
Mar 13 00:38:18.194677 unknown[2063]: wrote ssh authorized keys file for user: core
Mar 13 00:38:18.219235 systemd-coredump[2007]: Process 1950 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1950: #0 0x0000555b480f8aeb n/a (ntpd + 0x68aeb) #1 0x0000555b480a1cdf n/a (ntpd + 0x11cdf) #2 0x0000555b480a2575 n/a (ntpd + 0x12575) #3 0x0000555b4809dd8a n/a (ntpd + 0xdd8a) #4 0x0000555b4809f5d3 n/a (ntpd + 0xf5d3) #5 0x0000555b480a7fd1 n/a (ntpd + 0x17fd1) #6 0x0000555b48098c2d n/a (ntpd + 0x8c2d) #7 0x00007fd49decc16c n/a (libc.so.6 + 0x2716c) #8 0x00007fd49decc229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000555b48098c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Mar 13 00:38:18.225931 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 13 00:38:18.226118 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 13 00:38:18.232994 systemd[1]: systemd-coredump@0-2003-0.service: Deactivated successfully.
Mar 13 00:38:18.234601 locksmithd[2017]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 00:38:18.287882 update-ssh-keys[2124]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:38:18.287206 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 13 00:38:18.304262 containerd[1987]: time="2026-03-13T00:38:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:38:18.304262 containerd[1987]: time="2026-03-13T00:38:18.300592499Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:38:18.297931 systemd[1]: Finished sshkeys.service.
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.329925297Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.961µs"
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.329978139Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330002937Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330173051Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330197507Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330228762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330301192Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330316167Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330625407Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330648495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330668278Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331491 containerd[1987]: time="2026-03-13T00:38:18.330680795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331989 containerd[1987]: time="2026-03-13T00:38:18.330773383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331989 containerd[1987]: time="2026-03-13T00:38:18.331009718Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331989 containerd[1987]: time="2026-03-13T00:38:18.331041988Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:38:18.331989 containerd[1987]: time="2026-03-13T00:38:18.331055657Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 13 00:38:18.331989 containerd[1987]: time="2026-03-13T00:38:18.331083822Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 13 00:38:18.336433 containerd[1987]: time="2026-03-13T00:38:18.334166373Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 13 00:38:18.336433 containerd[1987]: time="2026-03-13T00:38:18.334292396Z" level=info msg="metadata content store policy set" policy=shared
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338585114Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338668442Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338690299Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338753118Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338773500Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338788719Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338809024Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338825412Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338841108Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338862672Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338876772Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.338895733Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.339036865Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 13 00:38:18.340447 containerd[1987]: time="2026-03-13T00:38:18.339066915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339087773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339109810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339127237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339143790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339160663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339176246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339193437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339209405Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339223567Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339280663Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.339297963Z" level=info msg="Start snapshots syncer"
Mar 13 00:38:18.341023 containerd[1987]: time="2026-03-13T00:38:18.340784707Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:38:18.341514 containerd[1987]: time="2026-03-13T00:38:18.341220311Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 13 00:38:18.341514 containerd[1987]: time="2026-03-13T00:38:18.341282743Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 13 00:38:18.342494 containerd[1987]: time="2026-03-13T00:38:18.342454140Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 13 00:38:18.343260 containerd[1987]: time="2026-03-13T00:38:18.343225677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 13 00:38:18.343333 containerd[1987]: time="2026-03-13T00:38:18.343271747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 13 00:38:18.343333 containerd[1987]: time="2026-03-13T00:38:18.343289984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 13 00:38:18.343333 containerd[1987]: time="2026-03-13T00:38:18.343305419Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 13 00:38:18.343333 containerd[1987]: time="2026-03-13T00:38:18.343324776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 13 00:38:18.343504 containerd[1987]: time="2026-03-13T00:38:18.343339917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 13 00:38:18.343504 containerd[1987]: time="2026-03-13T00:38:18.343365267Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 13 00:38:18.343504 containerd[1987]: time="2026-03-13T00:38:18.343402723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 13 00:38:18.343504 containerd[1987]: time="2026-03-13T00:38:18.343468412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 13 00:38:18.343504 containerd[1987]: time="2026-03-13T00:38:18.343492756Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 13 00:38:18.343845 containerd[1987]: time="2026-03-13T00:38:18.343824560Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344366484Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344390176Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344426357Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344441148Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344457169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344482201Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344504377Z" level=info msg="runtime interface created"
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344512498Z" level=info msg="created NRI interface"
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344524291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344543362Z" level=info msg="Connect containerd service"
Mar 13 00:38:18.345426 containerd[1987]: time="2026-03-13T00:38:18.344575797Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 13 00:38:18.347047 containerd[1987]: time="2026-03-13T00:38:18.347012276Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 13 00:38:18.348380 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:38:18.354531 systemd[1]: Started ntpd.service - Network Time Service.
Mar 13 00:38:18.491251 ntpd[2147]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: ----------------------------------------------------
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: corporation. Support and training for ntp-4 are
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: available at https://www.nwtime.org/support
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: ----------------------------------------------------
Mar 13 00:38:18.515993 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: proto: precision = 0.100 usec (-23)
Mar 13 00:38:18.513560 ntpd[2147]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:38:18.513574 ntpd[2147]: ----------------------------------------------------
Mar 13 00:38:18.513586 ntpd[2147]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:38:18.513595 ntpd[2147]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:38:18.513605 ntpd[2147]: corporation. Support and training for ntp-4 are
Mar 13 00:38:18.513614 ntpd[2147]: available at https://www.nwtime.org/support
Mar 13 00:38:18.513625 ntpd[2147]: ----------------------------------------------------
Mar 13 00:38:18.514350 ntpd[2147]: proto: precision = 0.100 usec (-23)
Mar 13 00:38:18.531445 kernel: ntpd[2147]: segfault at 24 ip 0000560cdb19baeb sp 00007ffc35ea5e30 error 4 in ntpd[68aeb,560cdb139000+80000] likely on CPU 1 (core 0, socket 0)
Mar 13 00:38:18.531547 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Mar 13 00:38:18.519896 ntpd[2147]: basedate set to 2026-02-28
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: basedate set to 2026-02-28
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: Listen normally on 3 eth0 172.31.22.244:123
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: Listen normally on 4 lo [::1]:123
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: bind(21) AF_INET6 [fe80::491:7dff:feef:203d%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 13 00:38:18.531655 ntpd[2147]: 13 Mar 00:38:18 ntpd[2147]: unable to create socket on eth0 (5) for [fe80::491:7dff:feef:203d%2]:123
Mar 13 00:38:18.519921 ntpd[2147]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:38:18.520027 ntpd[2147]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:38:18.520054 ntpd[2147]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:38:18.520244 ntpd[2147]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:38:18.520270 ntpd[2147]: Listen normally on 3 eth0 172.31.22.244:123
Mar 13 00:38:18.520300 ntpd[2147]: Listen normally on 4 lo [::1]:123
Mar 13 00:38:18.520332 ntpd[2147]: bind(21) AF_INET6 [fe80::491:7dff:feef:203d%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 13 00:38:18.520353 ntpd[2147]: unable to create socket on eth0 (5) for [fe80::491:7dff:feef:203d%2]:123
Mar 13 00:38:18.620377 systemd-coredump[2166]: Process 2147 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 13 00:38:18.629720 systemd[1]: Started systemd-coredump@1-2166-0.service - Process Core Dump (PID 2166/UID 0).
Mar 13 00:38:18.703910 polkitd[2108]: Started polkitd version 126
Mar 13 00:38:18.725033 polkitd[2108]: Loading rules from directory /etc/polkit-1/rules.d
Mar 13 00:38:18.739018 polkitd[2108]: Loading rules from directory /run/polkit-1/rules.d
Mar 13 00:38:18.739110 polkitd[2108]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 13 00:38:18.741017 polkitd[2108]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Mar 13 00:38:18.741067 polkitd[2108]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 13 00:38:18.741122 polkitd[2108]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 13 00:38:18.741826 polkitd[2108]: Finished loading, compiling and executing 2 rules
Mar 13 00:38:18.742138 systemd[1]: Started polkit.service - Authorization Manager.
Mar 13 00:38:18.745544 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 13 00:38:18.745859 polkitd[2108]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 13 00:38:18.753790 systemd-networkd[1795]: eth0: Gained IPv6LL
Mar 13 00:38:18.766429 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 13 00:38:18.767711 systemd[1]: Reached target network-online.target - Network is Online.
Mar 13 00:38:18.770201 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 13 00:38:18.775757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:38:18.778061 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795403357Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795488735Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795519503Z" level=info msg="Start subscribing containerd event"
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795550246Z" level=info msg="Start recovering state"
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795650153Z" level=info msg="Start event monitor"
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795663394Z" level=info msg="Start cni network conf syncer for default"
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795675091Z" level=info msg="Start streaming server"
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795685528Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795695024Z" level=info msg="runtime interface starting up..."
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795703400Z" level=info msg="starting plugins..."
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795718234Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 13 00:38:18.800600 containerd[1987]: time="2026-03-13T00:38:18.795839684Z" level=info msg="containerd successfully booted in 0.497674s"
Mar 13 00:38:18.796544 systemd[1]: Started containerd.service - containerd container runtime.
Mar 13 00:38:18.819701 systemd-hostnamed[2011]: Hostname set to (transient)
Mar 13 00:38:18.822146 systemd-resolved[1796]: System hostname changed to 'ip-172-31-22-244'.
Mar 13 00:38:18.888756 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 13 00:38:18.964633 systemd-coredump[2167]: Process 2147 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2147: #0 0x0000560cdb19baeb n/a (ntpd + 0x68aeb) #1 0x0000560cdb144cdf n/a (ntpd + 0x11cdf) #2 0x0000560cdb145575 n/a (ntpd + 0x12575) #3 0x0000560cdb140d8a n/a (ntpd + 0xdd8a) #4 0x0000560cdb1425d3 n/a (ntpd + 0xf5d3) #5 0x0000560cdb14afd1 n/a (ntpd + 0x17fd1) #6 0x0000560cdb13bc2d n/a (ntpd + 0x8c2d) #7 0x00007fddb0f6516c n/a (libc.so.6 + 0x2716c) #8 0x00007fddb0f65229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000560cdb13bc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Mar 13 00:38:18.970781 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 13 00:38:18.970967 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 13 00:38:18.977073 systemd[1]: systemd-coredump@1-2166-0.service: Deactivated successfully.
Mar 13 00:38:19.004997 amazon-ssm-agent[2180]: Initializing new seelog logger
Mar 13 00:38:19.007682 amazon-ssm-agent[2180]: New Seelog Logger Creation Complete
Mar 13 00:38:19.008747 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 13 00:38:19.008863 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 13 00:38:19.010677 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 processing appconfig overrides
Mar 13 00:38:19.014336 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 13 00:38:19.014431 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.0110 INFO Proxy environment variables:
Mar 13 00:38:19.014860 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 13 00:38:19.015048 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 processing appconfig overrides
Mar 13 00:38:19.016543 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 13 00:38:19.016543 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 13 00:38:19.016543 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 processing appconfig overrides
Mar 13 00:38:19.020118 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 13 00:38:19.020118 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 13 00:38:19.020220 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 processing appconfig overrides
Mar 13 00:38:19.020429 tar[1967]: linux-amd64/README.md
Mar 13 00:38:19.048084 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 13 00:38:19.115014 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.0142 INFO https_proxy:
Mar 13 00:38:19.119787 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2.
Mar 13 00:38:19.126715 systemd[1]: Started ntpd.service - Network Time Service.
Mar 13 00:38:19.170813 ntpd[2211]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:38:19.172960 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:38:19.173440 ntpd[2211]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:38:19.173643 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:38:19.173695 ntpd[2211]: ----------------------------------------------------
Mar 13 00:38:19.173759 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: ----------------------------------------------------
Mar 13 00:38:19.173812 ntpd[2211]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:38:19.173869 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:38:19.173917 ntpd[2211]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:38:19.173978 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:38:19.174026 ntpd[2211]: corporation. Support and training for ntp-4 are
Mar 13 00:38:19.174086 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: corporation. Support and training for ntp-4 are
Mar 13 00:38:19.175416 ntpd[2211]: available at https://www.nwtime.org/support
Mar 13 00:38:19.176543 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: available at https://www.nwtime.org/support
Mar 13 00:38:19.176543 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: ----------------------------------------------------
Mar 13 00:38:19.176543 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: proto: precision = 0.092 usec (-23)
Mar 13 00:38:19.175435 ntpd[2211]: ----------------------------------------------------
Mar 13 00:38:19.176190 ntpd[2211]: proto: precision = 0.092 usec (-23)
Mar 13 00:38:19.178470 ntpd[2211]: basedate set to 2026-02-28
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: basedate set to 2026-02-28
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Listen normally on 3 eth0 172.31.22.244:123
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Listen normally on 4 lo [::1]:123
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Listen normally on 5 eth0 [fe80::491:7dff:feef:203d%2]:123
Mar 13 00:38:19.179463 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: Listening on routing socket on fd #22 for interface updates
Mar 13 00:38:19.178489 ntpd[2211]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:38:19.178578 ntpd[2211]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:38:19.178604 ntpd[2211]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:38:19.178783 ntpd[2211]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:38:19.178808 ntpd[2211]: Listen normally on 3 eth0 172.31.22.244:123
Mar 13 00:38:19.178833 ntpd[2211]: Listen normally on 4 lo [::1]:123
Mar 13 00:38:19.178858 ntpd[2211]: Listen normally on 5 eth0 [fe80::491:7dff:feef:203d%2]:123
Mar 13 00:38:19.178883 ntpd[2211]: Listening on routing socket on fd #22 for interface updates
Mar 13 00:38:19.184697 ntpd[2211]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 13 00:38:19.185529 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 13 00:38:19.185529 ntpd[2211]: 13 Mar 00:38:19 ntpd[2211]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 13 00:38:19.184737 ntpd[2211]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 13 00:38:19.209272 sshd_keygen[1992]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 13 00:38:19.214888 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.0142 INFO http_proxy:
Mar 13 00:38:19.245135 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 13 00:38:19.248045 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 13 00:38:19.280908 systemd[1]: issuegen.service: Deactivated successfully.
Mar 13 00:38:19.281353 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 13 00:38:19.289152 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 13 00:38:19.313570 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.0142 INFO no_proxy:
Mar 13 00:38:19.323379 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 13 00:38:19.329774 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 13 00:38:19.333680 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 13 00:38:19.335724 systemd[1]: Reached target getty.target - Login Prompts.
Mar 13 00:38:19.412776 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.0154 INFO Checking if agent identity type OnPrem can be assumed Mar 13 00:38:19.463969 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:38:19.463969 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 13 00:38:19.464204 amazon-ssm-agent[2180]: 2026/03/13 00:38:19 processing appconfig overrides Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.0156 INFO Checking if agent identity type EC2 can be assumed Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1129 INFO Agent will take identity from EC2 Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1186 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1187 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1187 INFO [amazon-ssm-agent] Starting Core Agent Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1187 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1187 INFO [Registrar] Starting registrar module Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1212 INFO [EC2Identity] Checking disk for registration info Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1213 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.1213 INFO [EC2Identity] Generating registration keypair Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4249 INFO [EC2Identity] Checking write access before registering Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4253 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4637 INFO [EC2Identity] EC2 registration was successful. Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4637 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4638 INFO [CredentialRefresher] credentialRefresher has started Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4638 INFO [CredentialRefresher] Starting credentials refresher loop Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4889 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 13 00:38:19.489390 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4891 INFO [CredentialRefresher] Credentials ready Mar 13 00:38:19.512143 amazon-ssm-agent[2180]: 2026-03-13 00:38:19.4893 INFO [CredentialRefresher] Next credential rotation will be in 29.999993386383334 minutes Mar 13 00:38:19.785368 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 13 00:38:19.787810 systemd[1]: Started sshd@0-172.31.22.244:22-20.161.92.111:44686.service - OpenSSH per-connection server daemon (20.161.92.111:44686). Mar 13 00:38:20.244193 sshd[2232]: Accepted publickey for core from 20.161.92.111 port 44686 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:38:20.247554 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:38:20.255435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:38:20.257554 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:38:20.270342 systemd-logind[1959]: New session 1 of user core. Mar 13 00:38:20.287269 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:38:20.292824 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:38:20.310344 (systemd)[2237]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:38:20.313338 systemd-logind[1959]: New session c1 of user core. Mar 13 00:38:20.489977 systemd[2237]: Queued start job for default target default.target. Mar 13 00:38:20.494300 systemd[2237]: Created slice app.slice - User Application Slice. Mar 13 00:38:20.494343 systemd[2237]: Reached target paths.target - Paths. Mar 13 00:38:20.494530 systemd[2237]: Reached target timers.target - Timers. Mar 13 00:38:20.496644 systemd[2237]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:38:20.511089 amazon-ssm-agent[2180]: 2026-03-13 00:38:20.5088 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 13 00:38:20.523616 systemd[2237]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:38:20.523810 systemd[2237]: Reached target sockets.target - Sockets. Mar 13 00:38:20.523877 systemd[2237]: Reached target basic.target - Basic System. 
Mar 13 00:38:20.523935 systemd[2237]: Reached target default.target - Main User Target. Mar 13 00:38:20.523963 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:38:20.523977 systemd[2237]: Startup finished in 200ms. Mar 13 00:38:20.530766 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:38:20.610700 amazon-ssm-agent[2180]: 2026-03-13 00:38:20.5251 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2246) started Mar 13 00:38:20.711437 amazon-ssm-agent[2180]: 2026-03-13 00:38:20.5251 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 13 00:38:20.785756 systemd[1]: Started sshd@1-172.31.22.244:22-20.161.92.111:44690.service - OpenSSH per-connection server daemon (20.161.92.111:44690). Mar 13 00:38:21.097331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:38:21.098671 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:38:21.100440 systemd[1]: Startup finished in 2.629s (kernel) + 7.496s (initrd) + 8.027s (userspace) = 18.152s. Mar 13 00:38:21.105495 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:38:21.223235 sshd[2263]: Accepted publickey for core from 20.161.92.111 port 44690 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:38:21.225088 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:38:21.231673 systemd-logind[1959]: New session 2 of user core. Mar 13 00:38:21.239659 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:38:21.460909 sshd[2276]: Connection closed by 20.161.92.111 port 44690 Mar 13 00:38:21.461581 sshd-session[2263]: pam_unix(sshd:session): session closed for user core Mar 13 00:38:21.466311 systemd-logind[1959]: Session 2 logged out. 
Waiting for processes to exit. Mar 13 00:38:21.468302 systemd[1]: sshd@1-172.31.22.244:22-20.161.92.111:44690.service: Deactivated successfully. Mar 13 00:38:21.470186 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:38:21.472074 systemd-logind[1959]: Removed session 2. Mar 13 00:38:21.551369 systemd[1]: Started sshd@2-172.31.22.244:22-20.161.92.111:44692.service - OpenSSH per-connection server daemon (20.161.92.111:44692). Mar 13 00:38:21.989375 sshd[2287]: Accepted publickey for core from 20.161.92.111 port 44692 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:38:21.990888 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:38:21.998567 systemd-logind[1959]: New session 3 of user core. Mar 13 00:38:22.003627 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:38:22.222479 sshd[2290]: Connection closed by 20.161.92.111 port 44692 Mar 13 00:38:22.223698 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Mar 13 00:38:22.229193 systemd[1]: sshd@2-172.31.22.244:22-20.161.92.111:44692.service: Deactivated successfully. Mar 13 00:38:22.232098 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:38:22.235054 systemd-logind[1959]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:38:22.236754 systemd-logind[1959]: Removed session 3. Mar 13 00:38:22.264201 kubelet[2271]: E0313 00:38:22.264076 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:38:22.267236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:38:22.267451 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 13 00:38:22.268036 systemd[1]: kubelet.service: Consumed 1.043s CPU time, 266.5M memory peak. Mar 13 00:38:22.313899 systemd[1]: Started sshd@3-172.31.22.244:22-20.161.92.111:44706.service - OpenSSH per-connection server daemon (20.161.92.111:44706). Mar 13 00:38:22.745438 sshd[2297]: Accepted publickey for core from 20.161.92.111 port 44706 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:38:22.746800 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:38:22.752468 systemd-logind[1959]: New session 4 of user core. Mar 13 00:38:22.759620 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:38:22.981545 sshd[2300]: Connection closed by 20.161.92.111 port 44706 Mar 13 00:38:22.983343 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Mar 13 00:38:22.987740 systemd-logind[1959]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:38:22.988099 systemd[1]: sshd@3-172.31.22.244:22-20.161.92.111:44706.service: Deactivated successfully. Mar 13 00:38:22.990178 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:38:22.992089 systemd-logind[1959]: Removed session 4. Mar 13 00:38:23.073765 systemd[1]: Started sshd@4-172.31.22.244:22-20.161.92.111:44718.service - OpenSSH per-connection server daemon (20.161.92.111:44718). Mar 13 00:38:23.504461 sshd[2306]: Accepted publickey for core from 20.161.92.111 port 44718 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:38:23.505754 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:38:23.510490 systemd-logind[1959]: New session 5 of user core. Mar 13 00:38:23.517660 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 13 00:38:23.678962 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:38:23.679338 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:38:23.691791 sudo[2310]: pam_unix(sudo:session): session closed for user root Mar 13 00:38:23.769764 sshd[2309]: Connection closed by 20.161.92.111 port 44718 Mar 13 00:38:23.770630 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Mar 13 00:38:23.775191 systemd[1]: sshd@4-172.31.22.244:22-20.161.92.111:44718.service: Deactivated successfully. Mar 13 00:38:23.777523 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:38:23.779475 systemd-logind[1959]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:38:23.781072 systemd-logind[1959]: Removed session 5. Mar 13 00:38:23.862027 systemd[1]: Started sshd@5-172.31.22.244:22-20.161.92.111:44728.service - OpenSSH per-connection server daemon (20.161.92.111:44728). Mar 13 00:38:24.289744 sshd[2316]: Accepted publickey for core from 20.161.92.111 port 44728 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:38:24.291138 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:38:24.297425 systemd-logind[1959]: New session 6 of user core. Mar 13 00:38:24.305655 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 13 00:38:24.449911 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:38:24.450277 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:38:24.455671 sudo[2321]: pam_unix(sudo:session): session closed for user root Mar 13 00:38:24.461274 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:38:24.461727 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:38:24.472871 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:38:24.511855 augenrules[2343]: No rules Mar 13 00:38:24.513525 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:38:24.513787 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:38:24.514837 sudo[2320]: pam_unix(sudo:session): session closed for user root Mar 13 00:38:24.592398 sshd[2319]: Connection closed by 20.161.92.111 port 44728 Mar 13 00:38:24.594005 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Mar 13 00:38:24.598893 systemd[1]: sshd@5-172.31.22.244:22-20.161.92.111:44728.service: Deactivated successfully. Mar 13 00:38:24.601106 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:38:24.602359 systemd-logind[1959]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:38:24.603627 systemd-logind[1959]: Removed session 6. Mar 13 00:38:24.683654 systemd[1]: Started sshd@6-172.31.22.244:22-20.161.92.111:44732.service - OpenSSH per-connection server daemon (20.161.92.111:44732). 
Mar 13 00:38:25.115128 sshd[2352]: Accepted publickey for core from 20.161.92.111 port 44732 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:38:25.116559 sshd-session[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:38:25.121473 systemd-logind[1959]: New session 7 of user core. Mar 13 00:38:25.133666 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:38:25.275616 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:38:25.275986 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:38:25.711662 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:38:25.725003 (dockerd)[2374]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:38:26.035051 dockerd[2374]: time="2026-03-13T00:38:26.034607665Z" level=info msg="Starting up" Mar 13 00:38:26.037928 dockerd[2374]: time="2026-03-13T00:38:26.037893546Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:38:26.049641 dockerd[2374]: time="2026-03-13T00:38:26.049590224Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:38:26.081893 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3637737852-merged.mount: Deactivated successfully. Mar 13 00:38:26.150790 dockerd[2374]: time="2026-03-13T00:38:26.150738124Z" level=info msg="Loading containers: start." Mar 13 00:38:26.164437 kernel: Initializing XFRM netlink socket Mar 13 00:38:26.929051 systemd-resolved[1796]: Clock change detected. Flushing caches. Mar 13 00:38:27.162591 (udev-worker)[2396]: Network interface NamePolicy= disabled on kernel command line. 
Mar 13 00:38:27.209658 systemd-networkd[1795]: docker0: Link UP Mar 13 00:38:27.222890 dockerd[2374]: time="2026-03-13T00:38:27.222812705Z" level=info msg="Loading containers: done." Mar 13 00:38:27.238905 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck394156169-merged.mount: Deactivated successfully. Mar 13 00:38:27.248669 dockerd[2374]: time="2026-03-13T00:38:27.248615689Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:38:27.248899 dockerd[2374]: time="2026-03-13T00:38:27.248720548Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:38:27.248899 dockerd[2374]: time="2026-03-13T00:38:27.248830295Z" level=info msg="Initializing buildkit" Mar 13 00:38:27.304385 dockerd[2374]: time="2026-03-13T00:38:27.304326585Z" level=info msg="Completed buildkit initialization" Mar 13 00:38:27.313162 dockerd[2374]: time="2026-03-13T00:38:27.313105931Z" level=info msg="Daemon has completed initialization" Mar 13 00:38:27.313446 dockerd[2374]: time="2026-03-13T00:38:27.313273152Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:38:27.313419 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:38:28.101869 containerd[1987]: time="2026-03-13T00:38:28.101826017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 13 00:38:28.703037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356142369.mount: Deactivated successfully. 
Mar 13 00:38:30.410772 containerd[1987]: time="2026-03-13T00:38:30.410719627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:30.411924 containerd[1987]: time="2026-03-13T00:38:30.411875354Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 13 00:38:30.412864 containerd[1987]: time="2026-03-13T00:38:30.412812663Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:30.416475 containerd[1987]: time="2026-03-13T00:38:30.416437718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:30.419554 containerd[1987]: time="2026-03-13T00:38:30.419516585Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.317645272s" Mar 13 00:38:30.419642 containerd[1987]: time="2026-03-13T00:38:30.419562166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 13 00:38:30.422729 containerd[1987]: time="2026-03-13T00:38:30.422511249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 13 00:38:33.118999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 13 00:38:33.122463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:38:33.391420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:38:33.401849 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:38:33.414385 containerd[1987]: time="2026-03-13T00:38:33.413786722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:33.417393 containerd[1987]: time="2026-03-13T00:38:33.417062029Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 13 00:38:33.420076 containerd[1987]: time="2026-03-13T00:38:33.420008191Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:33.426592 containerd[1987]: time="2026-03-13T00:38:33.426527276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:33.428014 containerd[1987]: time="2026-03-13T00:38:33.427824510Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 3.005277433s" Mar 13 00:38:33.428014 containerd[1987]: time="2026-03-13T00:38:33.427864979Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image 
reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 13 00:38:33.428762 containerd[1987]: time="2026-03-13T00:38:33.428537140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 13 00:38:33.457146 kubelet[2656]: E0313 00:38:33.457091 2656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:38:33.462482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:38:33.462672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:38:33.463168 systemd[1]: kubelet.service: Consumed 216ms CPU time, 108.5M memory peak. Mar 13 00:38:34.945932 containerd[1987]: time="2026-03-13T00:38:34.945883117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:34.946950 containerd[1987]: time="2026-03-13T00:38:34.946902845Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 13 00:38:34.948148 containerd[1987]: time="2026-03-13T00:38:34.948093660Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:34.950960 containerd[1987]: time="2026-03-13T00:38:34.950894059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:34.952091 containerd[1987]: time="2026-03-13T00:38:34.951909077Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.523337673s" Mar 13 00:38:34.952091 containerd[1987]: time="2026-03-13T00:38:34.951948727Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 13 00:38:34.952615 containerd[1987]: time="2026-03-13T00:38:34.952593588Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 13 00:38:35.992442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851215109.mount: Deactivated successfully. Mar 13 00:38:36.575406 containerd[1987]: time="2026-03-13T00:38:36.575331516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:36.576398 containerd[1987]: time="2026-03-13T00:38:36.576334080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 13 00:38:36.577690 containerd[1987]: time="2026-03-13T00:38:36.577630234Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:36.579708 containerd[1987]: time="2026-03-13T00:38:36.579658463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:36.580372 containerd[1987]: time="2026-03-13T00:38:36.580190630Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id 
\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.627476396s" Mar 13 00:38:36.580372 containerd[1987]: time="2026-03-13T00:38:36.580227488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 13 00:38:36.580865 containerd[1987]: time="2026-03-13T00:38:36.580839619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 13 00:38:37.053750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753904952.mount: Deactivated successfully. Mar 13 00:38:38.297778 containerd[1987]: time="2026-03-13T00:38:38.297720586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:38.302339 containerd[1987]: time="2026-03-13T00:38:38.302282905Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 13 00:38:38.306545 containerd[1987]: time="2026-03-13T00:38:38.306463666Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:38.312945 containerd[1987]: time="2026-03-13T00:38:38.312873346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:38.314183 containerd[1987]: time="2026-03-13T00:38:38.314034082Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.733156065s" Mar 13 00:38:38.314183 containerd[1987]: time="2026-03-13T00:38:38.314076608Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 13 00:38:38.315030 containerd[1987]: time="2026-03-13T00:38:38.314901062Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 13 00:38:38.828438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295105239.mount: Deactivated successfully. Mar 13 00:38:38.840064 containerd[1987]: time="2026-03-13T00:38:38.840006493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:38:38.842024 containerd[1987]: time="2026-03-13T00:38:38.841805871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 13 00:38:38.844108 containerd[1987]: time="2026-03-13T00:38:38.844071496Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:38:38.847413 containerd[1987]: time="2026-03-13T00:38:38.847339837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:38:38.848257 containerd[1987]: time="2026-03-13T00:38:38.848105273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.170104ms" Mar 13 00:38:38.848257 containerd[1987]: time="2026-03-13T00:38:38.848142329Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 13 00:38:38.848646 containerd[1987]: time="2026-03-13T00:38:38.848613165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 13 00:38:39.390465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673197978.mount: Deactivated successfully. Mar 13 00:38:40.782818 containerd[1987]: time="2026-03-13T00:38:40.782756909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:40.784841 containerd[1987]: time="2026-03-13T00:38:40.784625651Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 13 00:38:40.787022 containerd[1987]: time="2026-03-13T00:38:40.786974418Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:40.790943 containerd[1987]: time="2026-03-13T00:38:40.790874291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:40.792114 containerd[1987]: time="2026-03-13T00:38:40.791941029Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.943296177s" Mar 13 00:38:40.792114 containerd[1987]: time="2026-03-13T00:38:40.791982375Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 13 00:38:43.618754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 00:38:43.623245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:38:43.925541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:38:43.934879 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:38:43.997770 kubelet[2824]: E0313 00:38:43.997725 2824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:38:44.002195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:38:44.002416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:38:44.002848 systemd[1]: kubelet.service: Consumed 206ms CPU time, 107.9M memory peak. Mar 13 00:38:45.150822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:38:45.151071 systemd[1]: kubelet.service: Consumed 206ms CPU time, 107.9M memory peak. Mar 13 00:38:45.154084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:38:45.190760 systemd[1]: Reload requested from client PID 2837 ('systemctl') (unit session-7.scope)... Mar 13 00:38:45.190777 systemd[1]: Reloading... 
Mar 13 00:38:45.311384 zram_generator::config[2878]: No configuration found. Mar 13 00:38:45.617175 systemd[1]: Reloading finished in 425 ms. Mar 13 00:38:45.684001 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:38:45.684112 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:38:45.684458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:38:45.684518 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.2M memory peak. Mar 13 00:38:45.686484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:38:45.902730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:38:45.912859 (kubelet)[2945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:38:45.971647 kubelet[2945]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:38:45.971647 kubelet[2945]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:38:45.971647 kubelet[2945]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:38:45.972182 kubelet[2945]: I0313 00:38:45.971723 2945 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:38:46.920472 kubelet[2945]: I0313 00:38:46.920427 2945 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:38:46.920472 kubelet[2945]: I0313 00:38:46.920455 2945 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:38:46.920769 kubelet[2945]: I0313 00:38:46.920748 2945 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:38:46.989464 kubelet[2945]: I0313 00:38:46.989004 2945 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:38:46.991681 kubelet[2945]: E0313 00:38:46.991632 2945 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.22.244:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:38:47.016370 kubelet[2945]: I0313 00:38:47.016321 2945 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:38:47.025959 kubelet[2945]: I0313 00:38:47.025923 2945 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 13 00:38:47.028265 kubelet[2945]: I0313 00:38:47.028216 2945 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:38:47.032136 kubelet[2945]: I0313 00:38:47.028262 2945 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-244","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:38:47.033027 kubelet[2945]: I0313 00:38:47.032996 2945 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 
00:38:47.033027 kubelet[2945]: I0313 00:38:47.033028 2945 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:38:47.033217 kubelet[2945]: I0313 00:38:47.033193 2945 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:38:47.040022 kubelet[2945]: I0313 00:38:47.039982 2945 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:38:47.040022 kubelet[2945]: I0313 00:38:47.040025 2945 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:38:47.041314 kubelet[2945]: I0313 00:38:47.041284 2945 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:38:47.045597 kubelet[2945]: I0313 00:38:47.045165 2945 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:38:47.052944 kubelet[2945]: E0313 00:38:47.052902 2945 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-244&limit=500&resourceVersion=0\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:38:47.053077 kubelet[2945]: E0313 00:38:47.053055 2945 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.244:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:38:47.053724 kubelet[2945]: I0313 00:38:47.053701 2945 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:38:47.054419 kubelet[2945]: I0313 00:38:47.054396 2945 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 
00:38:47.055472 kubelet[2945]: W0313 00:38:47.055440 2945 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:38:47.062734 kubelet[2945]: I0313 00:38:47.062704 2945 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:38:47.062860 kubelet[2945]: I0313 00:38:47.062766 2945 server.go:1289] "Started kubelet" Mar 13 00:38:47.063030 kubelet[2945]: I0313 00:38:47.062998 2945 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:38:47.063886 kubelet[2945]: I0313 00:38:47.063853 2945 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:38:47.069139 kubelet[2945]: I0313 00:38:47.069101 2945 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:38:47.070867 kubelet[2945]: I0313 00:38:47.070397 2945 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:38:47.071081 kubelet[2945]: I0313 00:38:47.071056 2945 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:38:47.074081 kubelet[2945]: E0313 00:38:47.071578 2945 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.244:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.244:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-244.189c3fa40aa24318 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-244,UID:ip-172-31-22-244,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-244,},FirstTimestamp:2026-03-13 00:38:47.0627254 +0000 UTC m=+1.144436554,LastTimestamp:2026-03-13 00:38:47.0627254 +0000 UTC m=+1.144436554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-244,}" Mar 13 00:38:47.075116 kubelet[2945]: I0313 00:38:47.075061 2945 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:38:47.080818 kubelet[2945]: E0313 00:38:47.080786 2945 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-244\" not found" Mar 13 00:38:47.081059 kubelet[2945]: I0313 00:38:47.080961 2945 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:38:47.083337 kubelet[2945]: I0313 00:38:47.082670 2945 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:38:47.083337 kubelet[2945]: I0313 00:38:47.082737 2945 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:38:47.083337 kubelet[2945]: E0313 00:38:47.083187 2945 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:38:47.084641 kubelet[2945]: I0313 00:38:47.084621 2945 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:38:47.084821 kubelet[2945]: I0313 00:38:47.084803 2945 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:38:47.087780 kubelet[2945]: I0313 00:38:47.087756 2945 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:38:47.088112 kubelet[2945]: E0313 00:38:47.088030 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.22.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-244?timeout=10s\": dial tcp 172.31.22.244:6443: connect: connection refused" interval="200ms" Mar 13 00:38:47.091614 kubelet[2945]: E0313 00:38:47.091424 2945 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:38:47.112411 kubelet[2945]: I0313 00:38:47.112388 2945 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:38:47.112564 kubelet[2945]: I0313 00:38:47.112552 2945 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:38:47.112638 kubelet[2945]: I0313 00:38:47.112630 2945 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:38:47.118300 kubelet[2945]: I0313 00:38:47.118274 2945 policy_none.go:49] "None policy: Start" Mar 13 00:38:47.118752 kubelet[2945]: I0313 00:38:47.118469 2945 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:38:47.118752 kubelet[2945]: I0313 00:38:47.118493 2945 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:38:47.122835 kubelet[2945]: I0313 00:38:47.122803 2945 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:38:47.125533 kubelet[2945]: I0313 00:38:47.125130 2945 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:38:47.125533 kubelet[2945]: I0313 00:38:47.125148 2945 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:38:47.125533 kubelet[2945]: I0313 00:38:47.125167 2945 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:38:47.125533 kubelet[2945]: I0313 00:38:47.125181 2945 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:38:47.125533 kubelet[2945]: E0313 00:38:47.125262 2945 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:38:47.131572 kubelet[2945]: E0313 00:38:47.131508 2945 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.22.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:38:47.135765 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:38:47.147379 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:38:47.151941 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 13 00:38:47.165703 kubelet[2945]: E0313 00:38:47.165665 2945 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:38:47.165906 kubelet[2945]: I0313 00:38:47.165886 2945 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:38:47.166012 kubelet[2945]: I0313 00:38:47.165969 2945 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:38:47.166668 kubelet[2945]: I0313 00:38:47.166492 2945 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:38:47.168349 kubelet[2945]: E0313 00:38:47.168327 2945 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 00:38:47.168598 kubelet[2945]: E0313 00:38:47.168583 2945 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-244\" not found" Mar 13 00:38:47.239642 systemd[1]: Created slice kubepods-burstable-pod03d13b8f42972aaf4c64544bcd64b3cd.slice - libcontainer container kubepods-burstable-pod03d13b8f42972aaf4c64544bcd64b3cd.slice. Mar 13 00:38:47.256377 kubelet[2945]: E0313 00:38:47.256323 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:47.261802 systemd[1]: Created slice kubepods-burstable-pod5e7ea028a5627ab25edda690ce0e0011.slice - libcontainer container kubepods-burstable-pod5e7ea028a5627ab25edda690ce0e0011.slice. Mar 13 00:38:47.269460 kubelet[2945]: E0313 00:38:47.269275 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:47.271471 kubelet[2945]: I0313 00:38:47.270523 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-244" Mar 13 00:38:47.271578 kubelet[2945]: E0313 00:38:47.271493 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.244:6443/api/v1/nodes\": dial tcp 172.31.22.244:6443: connect: connection refused" node="ip-172-31-22-244" Mar 13 00:38:47.273122 systemd[1]: Created slice kubepods-burstable-pod35307706b4f0126ef3653e2f8699de14.slice - libcontainer container kubepods-burstable-pod35307706b4f0126ef3653e2f8699de14.slice. 
Mar 13 00:38:47.275218 kubelet[2945]: E0313 00:38:47.275193 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:47.288925 kubelet[2945]: E0313 00:38:47.288880 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-244?timeout=10s\": dial tcp 172.31.22.244:6443: connect: connection refused" interval="400ms" Mar 13 00:38:47.383448 kubelet[2945]: I0313 00:38:47.383395 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35307706b4f0126ef3653e2f8699de14-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-244\" (UID: \"35307706b4f0126ef3653e2f8699de14\") " pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:47.383448 kubelet[2945]: I0313 00:38:47.383444 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:47.383657 kubelet[2945]: I0313 00:38:47.383469 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:47.383657 kubelet[2945]: I0313 00:38:47.383490 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/5e7ea028a5627ab25edda690ce0e0011-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-244\" (UID: \"5e7ea028a5627ab25edda690ce0e0011\") " pod="kube-system/kube-scheduler-ip-172-31-22-244" Mar 13 00:38:47.383657 kubelet[2945]: I0313 00:38:47.383512 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35307706b4f0126ef3653e2f8699de14-ca-certs\") pod \"kube-apiserver-ip-172-31-22-244\" (UID: \"35307706b4f0126ef3653e2f8699de14\") " pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:47.383657 kubelet[2945]: I0313 00:38:47.383533 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:47.383657 kubelet[2945]: I0313 00:38:47.383553 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:47.383840 kubelet[2945]: I0313 00:38:47.383577 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:47.383840 kubelet[2945]: I0313 00:38:47.383604 2945 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35307706b4f0126ef3653e2f8699de14-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-244\" (UID: \"35307706b4f0126ef3653e2f8699de14\") " pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:47.474292 kubelet[2945]: I0313 00:38:47.474199 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-244" Mar 13 00:38:47.474891 kubelet[2945]: E0313 00:38:47.474858 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.244:6443/api/v1/nodes\": dial tcp 172.31.22.244:6443: connect: connection refused" node="ip-172-31-22-244" Mar 13 00:38:47.557894 containerd[1987]: time="2026-03-13T00:38:47.557776795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-244,Uid:03d13b8f42972aaf4c64544bcd64b3cd,Namespace:kube-system,Attempt:0,}" Mar 13 00:38:47.570706 containerd[1987]: time="2026-03-13T00:38:47.570568533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-244,Uid:5e7ea028a5627ab25edda690ce0e0011,Namespace:kube-system,Attempt:0,}" Mar 13 00:38:47.579084 containerd[1987]: time="2026-03-13T00:38:47.579042089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-244,Uid:35307706b4f0126ef3653e2f8699de14,Namespace:kube-system,Attempt:0,}" Mar 13 00:38:47.690824 kubelet[2945]: E0313 00:38:47.690775 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-244?timeout=10s\": dial tcp 172.31.22.244:6443: connect: connection refused" interval="800ms" Mar 13 00:38:47.736150 containerd[1987]: time="2026-03-13T00:38:47.736061723Z" level=info msg="connecting to shim 23420165abc6b8c103869ee803fa331d7de6c68d4f956f6021a9b2f2aa38bf44" 
address="unix:///run/containerd/s/e9d521c4806949f7d53964824fbf008b88674d95e9f691db81a1da1df784d8d0" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:47.740972 containerd[1987]: time="2026-03-13T00:38:47.740923447Z" level=info msg="connecting to shim c83a49e78cd7bb4f8e4ac9fb739745da4b2e2016927a2d46960a7915631abecf" address="unix:///run/containerd/s/5712f244d965907ebea17e81815334006b93500cb228845eb8c2e5e327211fee" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:47.752733 containerd[1987]: time="2026-03-13T00:38:47.752531760Z" level=info msg="connecting to shim 9043efe745c541ff6cfeb22e48dffc48f676c89c798621034105f726be8dd83a" address="unix:///run/containerd/s/2aaff2c70c720f86a73475baf6a45d3bfc50fc2b12aef4ab18c1fb86fe2140e7" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:47.862712 systemd[1]: Started cri-containerd-23420165abc6b8c103869ee803fa331d7de6c68d4f956f6021a9b2f2aa38bf44.scope - libcontainer container 23420165abc6b8c103869ee803fa331d7de6c68d4f956f6021a9b2f2aa38bf44. Mar 13 00:38:47.865752 systemd[1]: Started cri-containerd-9043efe745c541ff6cfeb22e48dffc48f676c89c798621034105f726be8dd83a.scope - libcontainer container 9043efe745c541ff6cfeb22e48dffc48f676c89c798621034105f726be8dd83a. Mar 13 00:38:47.868489 systemd[1]: Started cri-containerd-c83a49e78cd7bb4f8e4ac9fb739745da4b2e2016927a2d46960a7915631abecf.scope - libcontainer container c83a49e78cd7bb4f8e4ac9fb739745da4b2e2016927a2d46960a7915631abecf. 
Mar 13 00:38:47.881848 kubelet[2945]: I0313 00:38:47.881275 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-244" Mar 13 00:38:47.882802 kubelet[2945]: E0313 00:38:47.882650 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.244:6443/api/v1/nodes\": dial tcp 172.31.22.244:6443: connect: connection refused" node="ip-172-31-22-244" Mar 13 00:38:47.989387 containerd[1987]: time="2026-03-13T00:38:47.989328129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-244,Uid:35307706b4f0126ef3653e2f8699de14,Namespace:kube-system,Attempt:0,} returns sandbox id \"23420165abc6b8c103869ee803fa331d7de6c68d4f956f6021a9b2f2aa38bf44\"" Mar 13 00:38:48.004129 containerd[1987]: time="2026-03-13T00:38:48.004081498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-244,Uid:5e7ea028a5627ab25edda690ce0e0011,Namespace:kube-system,Attempt:0,} returns sandbox id \"c83a49e78cd7bb4f8e4ac9fb739745da4b2e2016927a2d46960a7915631abecf\"" Mar 13 00:38:48.007500 containerd[1987]: time="2026-03-13T00:38:48.007390469Z" level=info msg="CreateContainer within sandbox \"23420165abc6b8c103869ee803fa331d7de6c68d4f956f6021a9b2f2aa38bf44\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:38:48.023518 containerd[1987]: time="2026-03-13T00:38:48.023467640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-244,Uid:03d13b8f42972aaf4c64544bcd64b3cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9043efe745c541ff6cfeb22e48dffc48f676c89c798621034105f726be8dd83a\"" Mar 13 00:38:48.025792 containerd[1987]: time="2026-03-13T00:38:48.025140771Z" level=info msg="CreateContainer within sandbox \"c83a49e78cd7bb4f8e4ac9fb739745da4b2e2016927a2d46960a7915631abecf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:38:48.031809 containerd[1987]: 
time="2026-03-13T00:38:48.031762710Z" level=info msg="CreateContainer within sandbox \"9043efe745c541ff6cfeb22e48dffc48f676c89c798621034105f726be8dd83a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:38:48.036388 containerd[1987]: time="2026-03-13T00:38:48.036334027Z" level=info msg="Container 54fddb8a919f33dcb3bc8db9f29ad6c0f88263952db21da491bee70ad648b163: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:48.049913 containerd[1987]: time="2026-03-13T00:38:48.049843846Z" level=info msg="Container 3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:48.059270 containerd[1987]: time="2026-03-13T00:38:48.059215471Z" level=info msg="Container 04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:48.062490 containerd[1987]: time="2026-03-13T00:38:48.062448301Z" level=info msg="CreateContainer within sandbox \"23420165abc6b8c103869ee803fa331d7de6c68d4f956f6021a9b2f2aa38bf44\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54fddb8a919f33dcb3bc8db9f29ad6c0f88263952db21da491bee70ad648b163\"" Mar 13 00:38:48.064324 containerd[1987]: time="2026-03-13T00:38:48.064296154Z" level=info msg="StartContainer for \"54fddb8a919f33dcb3bc8db9f29ad6c0f88263952db21da491bee70ad648b163\"" Mar 13 00:38:48.067800 containerd[1987]: time="2026-03-13T00:38:48.067769588Z" level=info msg="connecting to shim 54fddb8a919f33dcb3bc8db9f29ad6c0f88263952db21da491bee70ad648b163" address="unix:///run/containerd/s/e9d521c4806949f7d53964824fbf008b88674d95e9f691db81a1da1df784d8d0" protocol=ttrpc version=3 Mar 13 00:38:48.070079 containerd[1987]: time="2026-03-13T00:38:48.070042258Z" level=info msg="CreateContainer within sandbox \"c83a49e78cd7bb4f8e4ac9fb739745da4b2e2016927a2d46960a7915631abecf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378\"" Mar 13 00:38:48.070948 containerd[1987]: time="2026-03-13T00:38:48.070923416Z" level=info msg="StartContainer for \"3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378\"" Mar 13 00:38:48.072615 containerd[1987]: time="2026-03-13T00:38:48.072571598Z" level=info msg="connecting to shim 3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378" address="unix:///run/containerd/s/5712f244d965907ebea17e81815334006b93500cb228845eb8c2e5e327211fee" protocol=ttrpc version=3 Mar 13 00:38:48.079558 containerd[1987]: time="2026-03-13T00:38:48.079503446Z" level=info msg="CreateContainer within sandbox \"9043efe745c541ff6cfeb22e48dffc48f676c89c798621034105f726be8dd83a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8\"" Mar 13 00:38:48.082494 containerd[1987]: time="2026-03-13T00:38:48.082454124Z" level=info msg="StartContainer for \"04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8\"" Mar 13 00:38:48.084773 containerd[1987]: time="2026-03-13T00:38:48.084723624Z" level=info msg="connecting to shim 04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8" address="unix:///run/containerd/s/2aaff2c70c720f86a73475baf6a45d3bfc50fc2b12aef4ab18c1fb86fe2140e7" protocol=ttrpc version=3 Mar 13 00:38:48.102724 systemd[1]: Started cri-containerd-54fddb8a919f33dcb3bc8db9f29ad6c0f88263952db21da491bee70ad648b163.scope - libcontainer container 54fddb8a919f33dcb3bc8db9f29ad6c0f88263952db21da491bee70ad648b163. Mar 13 00:38:48.114202 systemd[1]: Started cri-containerd-3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378.scope - libcontainer container 3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378. 
Mar 13 00:38:48.131298 kubelet[2945]: E0313 00:38:48.131270 2945 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.22.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:38:48.134196 systemd[1]: Started cri-containerd-04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8.scope - libcontainer container 04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8. Mar 13 00:38:48.234343 containerd[1987]: time="2026-03-13T00:38:48.233662829Z" level=info msg="StartContainer for \"3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378\" returns successfully" Mar 13 00:38:48.250660 containerd[1987]: time="2026-03-13T00:38:48.250617583Z" level=info msg="StartContainer for \"54fddb8a919f33dcb3bc8db9f29ad6c0f88263952db21da491bee70ad648b163\" returns successfully" Mar 13 00:38:48.286374 containerd[1987]: time="2026-03-13T00:38:48.286072474Z" level=info msg="StartContainer for \"04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8\" returns successfully" Mar 13 00:38:48.492385 kubelet[2945]: E0313 00:38:48.492283 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-244?timeout=10s\": dial tcp 172.31.22.244:6443: connect: connection refused" interval="1.6s" Mar 13 00:38:48.528385 kubelet[2945]: E0313 00:38:48.527989 2945 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.244:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:38:48.538747 
kubelet[2945]: E0313 00:38:48.538701 2945 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:38:48.651438 kubelet[2945]: E0313 00:38:48.651397 2945 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-244&limit=500&resourceVersion=0\": dial tcp 172.31.22.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:38:48.685497 kubelet[2945]: I0313 00:38:48.685467 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-244" Mar 13 00:38:49.170852 kubelet[2945]: E0313 00:38:49.170817 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:49.178385 kubelet[2945]: E0313 00:38:49.176939 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:49.178385 kubelet[2945]: E0313 00:38:49.178110 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:49.608033 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 13 00:38:50.181725 kubelet[2945]: E0313 00:38:50.181691 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:50.182913 kubelet[2945]: E0313 00:38:50.182882 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:50.183368 kubelet[2945]: E0313 00:38:50.183332 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:51.181642 kubelet[2945]: E0313 00:38:51.181589 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:51.182121 kubelet[2945]: E0313 00:38:51.182078 2945 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:51.904831 kubelet[2945]: E0313 00:38:51.904794 2945 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-244\" not found" node="ip-172-31-22-244" Mar 13 00:38:51.967855 kubelet[2945]: I0313 00:38:51.967445 2945 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-244" Mar 13 00:38:51.967855 kubelet[2945]: E0313 00:38:51.967505 2945 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-22-244\": node \"ip-172-31-22-244\" not found" Mar 13 00:38:51.983363 kubelet[2945]: I0313 00:38:51.983323 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:52.008152 kubelet[2945]: E0313 00:38:52.008087 2945 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-244\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:52.008377 kubelet[2945]: I0313 00:38:52.008336 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-244" Mar 13 00:38:52.011040 kubelet[2945]: E0313 00:38:52.011003 2945 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-244\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-244" Mar 13 00:38:52.011040 kubelet[2945]: I0313 00:38:52.011037 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:52.014015 kubelet[2945]: E0313 00:38:52.013972 2945 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-244\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:52.052864 kubelet[2945]: I0313 00:38:52.052814 2945 apiserver.go:52] "Watching apiserver" Mar 13 00:38:52.083861 kubelet[2945]: I0313 00:38:52.083802 2945 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:38:53.545031 kubelet[2945]: I0313 00:38:53.544998 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-244" Mar 13 00:38:53.883424 systemd[1]: Reload requested from client PID 3224 ('systemctl') (unit session-7.scope)... Mar 13 00:38:53.883442 systemd[1]: Reloading... Mar 13 00:38:54.010388 zram_generator::config[3268]: No configuration found. Mar 13 00:38:54.282426 systemd[1]: Reloading finished in 398 ms. Mar 13 00:38:54.310314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 00:38:54.327078 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:38:54.327528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:38:54.327625 systemd[1]: kubelet.service: Consumed 1.561s CPU time, 125.9M memory peak. Mar 13 00:38:54.329881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:38:54.635925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:38:54.650006 (kubelet)[3328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:38:54.712660 kubelet[3328]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:38:54.713923 kubelet[3328]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:38:54.713923 kubelet[3328]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:38:54.713923 kubelet[3328]: I0313 00:38:54.713208 3328 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:38:54.731268 kubelet[3328]: I0313 00:38:54.731234 3328 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:38:54.731496 kubelet[3328]: I0313 00:38:54.731419 3328 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:38:54.731840 kubelet[3328]: I0313 00:38:54.731816 3328 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:38:54.733554 kubelet[3328]: I0313 00:38:54.733526 3328 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:38:54.747926 kubelet[3328]: I0313 00:38:54.747839 3328 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:38:54.754823 kubelet[3328]: I0313 00:38:54.754507 3328 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:38:54.760978 kubelet[3328]: I0313 00:38:54.760886 3328 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 13 00:38:54.761204 kubelet[3328]: I0313 00:38:54.761135 3328 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:38:54.761547 kubelet[3328]: I0313 00:38:54.761188 3328 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-244","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:38:54.761699 kubelet[3328]: I0313 00:38:54.761606 3328 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 
00:38:54.761699 kubelet[3328]: I0313 00:38:54.761625 3328 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:38:54.761699 kubelet[3328]: I0313 00:38:54.761684 3328 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:38:54.764705 kubelet[3328]: I0313 00:38:54.764541 3328 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:38:54.764705 kubelet[3328]: I0313 00:38:54.764570 3328 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:38:54.764705 kubelet[3328]: I0313 00:38:54.764607 3328 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:38:54.764705 kubelet[3328]: I0313 00:38:54.764621 3328 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:38:54.775381 kubelet[3328]: I0313 00:38:54.771995 3328 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:38:54.775381 kubelet[3328]: I0313 00:38:54.775087 3328 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:38:54.786308 kubelet[3328]: I0313 00:38:54.786286 3328 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:38:54.787808 kubelet[3328]: I0313 00:38:54.787792 3328 server.go:1289] "Started kubelet" Mar 13 00:38:54.790889 kubelet[3328]: I0313 00:38:54.790865 3328 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:38:54.800395 kubelet[3328]: I0313 00:38:54.800237 3328 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:38:54.801776 kubelet[3328]: I0313 00:38:54.801675 3328 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:38:54.808437 kubelet[3328]: I0313 00:38:54.807346 3328 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:38:54.808437 kubelet[3328]: I0313 00:38:54.807992 3328 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:38:54.808437 kubelet[3328]: I0313 00:38:54.808032 3328 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:38:54.809390 kubelet[3328]: I0313 00:38:54.808962 3328 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:38:54.825429 kubelet[3328]: I0313 00:38:54.810174 3328 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:38:54.827948 kubelet[3328]: I0313 00:38:54.810192 3328 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:38:54.828590 kubelet[3328]: E0313 00:38:54.810384 3328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-244\" not found" Mar 13 00:38:54.828811 kubelet[3328]: I0313 00:38:54.823939 3328 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:38:54.829034 kubelet[3328]: E0313 00:38:54.826451 3328 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:38:54.829034 kubelet[3328]: I0313 00:38:54.828284 3328 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:38:54.832329 kubelet[3328]: I0313 00:38:54.830444 3328 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:38:54.832329 kubelet[3328]: I0313 00:38:54.830481 3328 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:38:54.845062 kubelet[3328]: I0313 00:38:54.844851 3328 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 13 00:38:54.845062 kubelet[3328]: I0313 00:38:54.844880 3328 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:38:54.845062 kubelet[3328]: I0313 00:38:54.844904 3328 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:38:54.845062 kubelet[3328]: I0313 00:38:54.844914 3328 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:38:54.845062 kubelet[3328]: E0313 00:38:54.844962 3328 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930523 3328 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930541 3328 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930566 3328 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930730 3328 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930751 3328 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930772 3328 policy_none.go:49] "None policy: Start" Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930784 3328 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930796 3328 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:38:54.931641 kubelet[3328]: I0313 00:38:54.930909 3328 state_mem.go:75] "Updated machine memory state" Mar 13 00:38:54.942036 kubelet[3328]: E0313 00:38:54.940859 3328 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:38:54.942036 kubelet[3328]: I0313 
00:38:54.941735 3328 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:38:54.942036 kubelet[3328]: I0313 00:38:54.941751 3328 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:38:54.947613 kubelet[3328]: I0313 00:38:54.947021 3328 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:38:54.952539 kubelet[3328]: I0313 00:38:54.952512 3328 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-244" Mar 13 00:38:54.958003 kubelet[3328]: I0313 00:38:54.955422 3328 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:54.958003 kubelet[3328]: I0313 00:38:54.957336 3328 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:54.961314 kubelet[3328]: E0313 00:38:54.961243 3328 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 00:38:54.972715 kubelet[3328]: E0313 00:38:54.972584 3328 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-244\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-244" Mar 13 00:38:55.029577 kubelet[3328]: I0313 00:38:55.029462 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e7ea028a5627ab25edda690ce0e0011-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-244\" (UID: \"5e7ea028a5627ab25edda690ce0e0011\") " pod="kube-system/kube-scheduler-ip-172-31-22-244" Mar 13 00:38:55.029577 kubelet[3328]: I0313 00:38:55.029504 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35307706b4f0126ef3653e2f8699de14-ca-certs\") pod \"kube-apiserver-ip-172-31-22-244\" (UID: \"35307706b4f0126ef3653e2f8699de14\") " pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:55.029577 kubelet[3328]: I0313 00:38:55.029542 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35307706b4f0126ef3653e2f8699de14-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-244\" (UID: \"35307706b4f0126ef3653e2f8699de14\") " pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:55.030350 kubelet[3328]: I0313 00:38:55.029582 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:55.030350 kubelet[3328]: I0313 00:38:55.029612 3328 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:55.030350 kubelet[3328]: I0313 00:38:55.029634 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:55.030350 kubelet[3328]: I0313 00:38:55.029659 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35307706b4f0126ef3653e2f8699de14-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-244\" (UID: \"35307706b4f0126ef3653e2f8699de14\") " pod="kube-system/kube-apiserver-ip-172-31-22-244" Mar 13 00:38:55.030350 kubelet[3328]: I0313 00:38:55.029681 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" Mar 13 00:38:55.030532 kubelet[3328]: I0313 00:38:55.029703 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03d13b8f42972aaf4c64544bcd64b3cd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-244\" (UID: \"03d13b8f42972aaf4c64544bcd64b3cd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-244" 
Mar 13 00:38:55.068867 kubelet[3328]: I0313 00:38:55.067582 3328 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-244" Mar 13 00:38:55.078937 kubelet[3328]: I0313 00:38:55.078263 3328 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-244" Mar 13 00:38:55.078937 kubelet[3328]: I0313 00:38:55.078574 3328 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-244" Mar 13 00:38:55.769806 kubelet[3328]: I0313 00:38:55.768867 3328 apiserver.go:52] "Watching apiserver" Mar 13 00:38:55.829485 kubelet[3328]: I0313 00:38:55.829446 3328 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:38:55.931146 kubelet[3328]: I0313 00:38:55.930904 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-244" podStartSLOduration=1.930888386 podStartE2EDuration="1.930888386s" podCreationTimestamp="2026-03-13 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:38:55.928078382 +0000 UTC m=+1.267159875" watchObservedRunningTime="2026-03-13 00:38:55.930888386 +0000 UTC m=+1.269969880" Mar 13 00:38:55.962153 kubelet[3328]: I0313 00:38:55.961847 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-244" podStartSLOduration=2.961826695 podStartE2EDuration="2.961826695s" podCreationTimestamp="2026-03-13 00:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:38:55.949854085 +0000 UTC m=+1.288935580" watchObservedRunningTime="2026-03-13 00:38:55.961826695 +0000 UTC m=+1.300908189" Mar 13 00:38:55.975755 kubelet[3328]: I0313 00:38:55.975516 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ip-172-31-22-244" podStartSLOduration=1.975496227 podStartE2EDuration="1.975496227s" podCreationTimestamp="2026-03-13 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:38:55.962556644 +0000 UTC m=+1.301638139" watchObservedRunningTime="2026-03-13 00:38:55.975496227 +0000 UTC m=+1.314577710" Mar 13 00:38:59.542701 kubelet[3328]: I0313 00:38:59.542642 3328 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:38:59.543910 kubelet[3328]: I0313 00:38:59.543409 3328 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:38:59.544043 containerd[1987]: time="2026-03-13T00:38:59.543040946Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:39:00.635010 systemd[1]: Created slice kubepods-besteffort-pod02eb0261_f799_47c2_83ed_71965d612e20.slice - libcontainer container kubepods-besteffort-pod02eb0261_f799_47c2_83ed_71965d612e20.slice. 
Mar 13 00:39:00.666280 kubelet[3328]: I0313 00:39:00.665459 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02eb0261-f799-47c2-83ed-71965d612e20-xtables-lock\") pod \"kube-proxy-djkhp\" (UID: \"02eb0261-f799-47c2-83ed-71965d612e20\") " pod="kube-system/kube-proxy-djkhp" Mar 13 00:39:00.667039 kubelet[3328]: I0313 00:39:00.666862 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02eb0261-f799-47c2-83ed-71965d612e20-kube-proxy\") pod \"kube-proxy-djkhp\" (UID: \"02eb0261-f799-47c2-83ed-71965d612e20\") " pod="kube-system/kube-proxy-djkhp" Mar 13 00:39:00.667039 kubelet[3328]: I0313 00:39:00.666940 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02eb0261-f799-47c2-83ed-71965d612e20-lib-modules\") pod \"kube-proxy-djkhp\" (UID: \"02eb0261-f799-47c2-83ed-71965d612e20\") " pod="kube-system/kube-proxy-djkhp" Mar 13 00:39:00.667039 kubelet[3328]: I0313 00:39:00.666964 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7f8b\" (UniqueName: \"kubernetes.io/projected/02eb0261-f799-47c2-83ed-71965d612e20-kube-api-access-j7f8b\") pod \"kube-proxy-djkhp\" (UID: \"02eb0261-f799-47c2-83ed-71965d612e20\") " pod="kube-system/kube-proxy-djkhp" Mar 13 00:39:00.812185 systemd[1]: Created slice kubepods-besteffort-podffee2c6e_b71b_4ac4_a844_4dcd4dc0c960.slice - libcontainer container kubepods-besteffort-podffee2c6e_b71b_4ac4_a844_4dcd4dc0c960.slice. 
Mar 13 00:39:00.868995 kubelet[3328]: I0313 00:39:00.868932 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwh7\" (UniqueName: \"kubernetes.io/projected/ffee2c6e-b71b-4ac4-a844-4dcd4dc0c960-kube-api-access-8pwh7\") pod \"tigera-operator-6bf85f8dd-fg7k5\" (UID: \"ffee2c6e-b71b-4ac4-a844-4dcd4dc0c960\") " pod="tigera-operator/tigera-operator-6bf85f8dd-fg7k5" Mar 13 00:39:00.869174 kubelet[3328]: I0313 00:39:00.869141 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ffee2c6e-b71b-4ac4-a844-4dcd4dc0c960-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-fg7k5\" (UID: \"ffee2c6e-b71b-4ac4-a844-4dcd4dc0c960\") " pod="tigera-operator/tigera-operator-6bf85f8dd-fg7k5" Mar 13 00:39:00.942926 containerd[1987]: time="2026-03-13T00:39:00.942826173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-djkhp,Uid:02eb0261-f799-47c2-83ed-71965d612e20,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:00.986717 containerd[1987]: time="2026-03-13T00:39:00.985586958Z" level=info msg="connecting to shim 8f9bd63e51e5c14b488dab87177ad6994e624e6cdb2a6942e09c14cd826a7189" address="unix:///run/containerd/s/9c2db4478853032e5fa9b569c6ba8240eb72496e04f0ba1a41fb41d806a5e502" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:01.032679 systemd[1]: Started cri-containerd-8f9bd63e51e5c14b488dab87177ad6994e624e6cdb2a6942e09c14cd826a7189.scope - libcontainer container 8f9bd63e51e5c14b488dab87177ad6994e624e6cdb2a6942e09c14cd826a7189. 
Mar 13 00:39:01.066309 containerd[1987]: time="2026-03-13T00:39:01.066205123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-djkhp,Uid:02eb0261-f799-47c2-83ed-71965d612e20,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f9bd63e51e5c14b488dab87177ad6994e624e6cdb2a6942e09c14cd826a7189\"" Mar 13 00:39:01.074060 containerd[1987]: time="2026-03-13T00:39:01.074016405Z" level=info msg="CreateContainer within sandbox \"8f9bd63e51e5c14b488dab87177ad6994e624e6cdb2a6942e09c14cd826a7189\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:39:01.093148 containerd[1987]: time="2026-03-13T00:39:01.093100596Z" level=info msg="Container dcbbf7c0d2d694830d062c23e7568f22dbe99ba5e3ede4519af6491673394bd0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:01.107260 containerd[1987]: time="2026-03-13T00:39:01.106983457Z" level=info msg="CreateContainer within sandbox \"8f9bd63e51e5c14b488dab87177ad6994e624e6cdb2a6942e09c14cd826a7189\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dcbbf7c0d2d694830d062c23e7568f22dbe99ba5e3ede4519af6491673394bd0\"" Mar 13 00:39:01.107766 containerd[1987]: time="2026-03-13T00:39:01.107734769Z" level=info msg="StartContainer for \"dcbbf7c0d2d694830d062c23e7568f22dbe99ba5e3ede4519af6491673394bd0\"" Mar 13 00:39:01.110803 containerd[1987]: time="2026-03-13T00:39:01.110762464Z" level=info msg="connecting to shim dcbbf7c0d2d694830d062c23e7568f22dbe99ba5e3ede4519af6491673394bd0" address="unix:///run/containerd/s/9c2db4478853032e5fa9b569c6ba8240eb72496e04f0ba1a41fb41d806a5e502" protocol=ttrpc version=3 Mar 13 00:39:01.119063 containerd[1987]: time="2026-03-13T00:39:01.118945551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-fg7k5,Uid:ffee2c6e-b71b-4ac4-a844-4dcd4dc0c960,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:39:01.136867 systemd[1]: Started cri-containerd-dcbbf7c0d2d694830d062c23e7568f22dbe99ba5e3ede4519af6491673394bd0.scope - 
libcontainer container dcbbf7c0d2d694830d062c23e7568f22dbe99ba5e3ede4519af6491673394bd0. Mar 13 00:39:01.152889 containerd[1987]: time="2026-03-13T00:39:01.152536262Z" level=info msg="connecting to shim 9bee0b059026818c34d24d4e6b1a73f875e58e71b5b1c3a6b007c39f5803e2e3" address="unix:///run/containerd/s/8ac2029598166c8067b4d582f2c959e4f3209ae10569c94057d4203ccb51cf7e" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:01.186584 systemd[1]: Started cri-containerd-9bee0b059026818c34d24d4e6b1a73f875e58e71b5b1c3a6b007c39f5803e2e3.scope - libcontainer container 9bee0b059026818c34d24d4e6b1a73f875e58e71b5b1c3a6b007c39f5803e2e3. Mar 13 00:39:01.232168 containerd[1987]: time="2026-03-13T00:39:01.230687015Z" level=info msg="StartContainer for \"dcbbf7c0d2d694830d062c23e7568f22dbe99ba5e3ede4519af6491673394bd0\" returns successfully" Mar 13 00:39:01.275326 containerd[1987]: time="2026-03-13T00:39:01.275220340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-fg7k5,Uid:ffee2c6e-b71b-4ac4-a844-4dcd4dc0c960,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9bee0b059026818c34d24d4e6b1a73f875e58e71b5b1c3a6b007c39f5803e2e3\"" Mar 13 00:39:01.278847 containerd[1987]: time="2026-03-13T00:39:01.278572277Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:39:01.941649 kubelet[3328]: I0313 00:39:01.941560 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-djkhp" podStartSLOduration=1.941481917 podStartE2EDuration="1.941481917s" podCreationTimestamp="2026-03-13 00:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:39:01.925146632 +0000 UTC m=+7.264228128" watchObservedRunningTime="2026-03-13 00:39:01.941481917 +0000 UTC m=+7.280563410" Mar 13 00:39:03.110551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195623498.mount: Deactivated successfully. 
Mar 13 00:39:03.701629 update_engine[1961]: I20260313 00:39:03.701562 1961 update_attempter.cc:509] Updating boot flags...
Mar 13 00:39:05.139818 containerd[1987]: time="2026-03-13T00:39:05.139760451Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:05.144779 containerd[1987]: time="2026-03-13T00:39:05.144714908Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 13 00:39:05.152122 containerd[1987]: time="2026-03-13T00:39:05.152059195Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:05.157672 containerd[1987]: time="2026-03-13T00:39:05.157577144Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:05.158681 containerd[1987]: time="2026-03-13T00:39:05.158645265Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.880030523s"
Mar 13 00:39:05.158842 containerd[1987]: time="2026-03-13T00:39:05.158820497Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 13 00:39:05.167931 containerd[1987]: time="2026-03-13T00:39:05.167890965Z" level=info msg="CreateContainer within sandbox \"9bee0b059026818c34d24d4e6b1a73f875e58e71b5b1c3a6b007c39f5803e2e3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 13 00:39:05.202765 containerd[1987]: time="2026-03-13T00:39:05.202027510Z" level=info msg="Container a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:39:05.208368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124879579.mount: Deactivated successfully.
Mar 13 00:39:05.241458 containerd[1987]: time="2026-03-13T00:39:05.241412665Z" level=info msg="CreateContainer within sandbox \"9bee0b059026818c34d24d4e6b1a73f875e58e71b5b1c3a6b007c39f5803e2e3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3\""
Mar 13 00:39:05.242790 containerd[1987]: time="2026-03-13T00:39:05.242534520Z" level=info msg="StartContainer for \"a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3\""
Mar 13 00:39:05.243655 containerd[1987]: time="2026-03-13T00:39:05.243592095Z" level=info msg="connecting to shim a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3" address="unix:///run/containerd/s/8ac2029598166c8067b4d582f2c959e4f3209ae10569c94057d4203ccb51cf7e" protocol=ttrpc version=3
Mar 13 00:39:05.274598 systemd[1]: Started cri-containerd-a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3.scope - libcontainer container a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3.
Mar 13 00:39:05.310389 containerd[1987]: time="2026-03-13T00:39:05.310287997Z" level=info msg="StartContainer for \"a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3\" returns successfully"
Mar 13 00:39:12.337065 sudo[2356]: pam_unix(sudo:session): session closed for user root
Mar 13 00:39:12.414539 sshd[2355]: Connection closed by 20.161.92.111 port 44732
Mar 13 00:39:12.417886 sshd-session[2352]: pam_unix(sshd:session): session closed for user core
Mar 13 00:39:12.426163 systemd[1]: sshd@6-172.31.22.244:22-20.161.92.111:44732.service: Deactivated successfully.
Mar 13 00:39:12.431772 systemd[1]: session-7.scope: Deactivated successfully.
Mar 13 00:39:12.433160 systemd[1]: session-7.scope: Consumed 6.952s CPU time, 157M memory peak.
Mar 13 00:39:12.436157 systemd-logind[1959]: Session 7 logged out. Waiting for processes to exit.
Mar 13 00:39:12.441886 systemd-logind[1959]: Removed session 7.
Mar 13 00:39:16.018452 kubelet[3328]: I0313 00:39:16.018254 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-fg7k5" podStartSLOduration=12.136478594 podStartE2EDuration="16.018217634s" podCreationTimestamp="2026-03-13 00:39:00 +0000 UTC" firstStartedPulling="2026-03-13 00:39:01.278221581 +0000 UTC m=+6.617303075" lastFinishedPulling="2026-03-13 00:39:05.159960629 +0000 UTC m=+10.499042115" observedRunningTime="2026-03-13 00:39:05.935581823 +0000 UTC m=+11.274663355" watchObservedRunningTime="2026-03-13 00:39:16.018217634 +0000 UTC m=+21.357299131"
Mar 13 00:39:16.037037 systemd[1]: Created slice kubepods-besteffort-pod54b1d5a0_3e27_4e03_83fa_6125838868b4.slice - libcontainer container kubepods-besteffort-pod54b1d5a0_3e27_4e03_83fa_6125838868b4.slice.
Mar 13 00:39:16.172226 kubelet[3328]: I0313 00:39:16.172077 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54b1d5a0-3e27-4e03-83fa-6125838868b4-typha-certs\") pod \"calico-typha-c5454cd4f-fqwlb\" (UID: \"54b1d5a0-3e27-4e03-83fa-6125838868b4\") " pod="calico-system/calico-typha-c5454cd4f-fqwlb"
Mar 13 00:39:16.172226 kubelet[3328]: I0313 00:39:16.172126 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ps4r\" (UniqueName: \"kubernetes.io/projected/54b1d5a0-3e27-4e03-83fa-6125838868b4-kube-api-access-7ps4r\") pod \"calico-typha-c5454cd4f-fqwlb\" (UID: \"54b1d5a0-3e27-4e03-83fa-6125838868b4\") " pod="calico-system/calico-typha-c5454cd4f-fqwlb"
Mar 13 00:39:16.172226 kubelet[3328]: I0313 00:39:16.172168 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b1d5a0-3e27-4e03-83fa-6125838868b4-tigera-ca-bundle\") pod \"calico-typha-c5454cd4f-fqwlb\" (UID: \"54b1d5a0-3e27-4e03-83fa-6125838868b4\") " pod="calico-system/calico-typha-c5454cd4f-fqwlb"
Mar 13 00:39:16.183712 systemd[1]: Created slice kubepods-besteffort-pod701018d6_0be1_4899_b7fc_4ad9e4a14401.slice - libcontainer container kubepods-besteffort-pod701018d6_0be1_4899_b7fc_4ad9e4a14401.slice.
Mar 13 00:39:16.272885 kubelet[3328]: I0313 00:39:16.272459 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-bpffs\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.272885 kubelet[3328]: I0313 00:39:16.272512 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-cni-log-dir\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.272885 kubelet[3328]: I0313 00:39:16.272545 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-cni-bin-dir\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.272885 kubelet[3328]: I0313 00:39:16.272570 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79c6\" (UniqueName: \"kubernetes.io/projected/701018d6-0be1-4899-b7fc-4ad9e4a14401-kube-api-access-l79c6\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.272885 kubelet[3328]: I0313 00:39:16.272627 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-cni-net-dir\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.273259 kubelet[3328]: I0313 00:39:16.272654 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-var-run-calico\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.273259 kubelet[3328]: I0313 00:39:16.272677 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-flexvol-driver-host\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.273259 kubelet[3328]: I0313 00:39:16.272702 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-sys-fs\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.273259 kubelet[3328]: I0313 00:39:16.272733 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-policysync\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.273259 kubelet[3328]: I0313 00:39:16.272755 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-xtables-lock\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.274730 kubelet[3328]: I0313 00:39:16.272777 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-nodeproc\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.274730 kubelet[3328]: I0313 00:39:16.272813 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-lib-modules\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.274730 kubelet[3328]: I0313 00:39:16.272851 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/701018d6-0be1-4899-b7fc-4ad9e4a14401-node-certs\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.274730 kubelet[3328]: I0313 00:39:16.272875 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701018d6-0be1-4899-b7fc-4ad9e4a14401-tigera-ca-bundle\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.274730 kubelet[3328]: I0313 00:39:16.272896 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/701018d6-0be1-4899-b7fc-4ad9e4a14401-var-lib-calico\") pod \"calico-node-dc2zf\" (UID: \"701018d6-0be1-4899-b7fc-4ad9e4a14401\") " pod="calico-system/calico-node-dc2zf"
Mar 13 00:39:16.286090 kubelet[3328]: E0313 00:39:16.286027 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308"
Mar 13 00:39:16.345932 containerd[1987]: time="2026-03-13T00:39:16.345648962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c5454cd4f-fqwlb,Uid:54b1d5a0-3e27-4e03-83fa-6125838868b4,Namespace:calico-system,Attempt:0,}"
Mar 13 00:39:16.415753 kubelet[3328]: E0313 00:39:16.415440 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.415753 kubelet[3328]: W0313 00:39:16.415479 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.415753 kubelet[3328]: E0313 00:39:16.415624 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.417844 kubelet[3328]: E0313 00:39:16.417663 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.417844 kubelet[3328]: W0313 00:39:16.417686 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.417844 kubelet[3328]: E0313 00:39:16.417717 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.419333 kubelet[3328]: E0313 00:39:16.419106 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.419333 kubelet[3328]: W0313 00:39:16.419127 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.419333 kubelet[3328]: E0313 00:39:16.419155 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.419531 containerd[1987]: time="2026-03-13T00:39:16.419499884Z" level=info msg="connecting to shim 90e1a34604abaed0a57a74be923e09c182c3c1277692a87b7513108b705cfd84" address="unix:///run/containerd/s/73e0b2e3a09fe9cbca54e2a09093cf404035486a231487e7cf8fb178f09a4cbc" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:39:16.422803 kubelet[3328]: E0313 00:39:16.422782 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.423613 kubelet[3328]: W0313 00:39:16.423586 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.424121 kubelet[3328]: E0313 00:39:16.424092 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.425020 kubelet[3328]: E0313 00:39:16.425002 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.425208 kubelet[3328]: W0313 00:39:16.425191 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.425596 kubelet[3328]: E0313 00:39:16.425578 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.449487 kubelet[3328]: E0313 00:39:16.449456 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.449487 kubelet[3328]: W0313 00:39:16.449483 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.449680 kubelet[3328]: E0313 00:39:16.449507 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.462800 systemd[1]: Started cri-containerd-90e1a34604abaed0a57a74be923e09c182c3c1277692a87b7513108b705cfd84.scope - libcontainer container 90e1a34604abaed0a57a74be923e09c182c3c1277692a87b7513108b705cfd84.
Mar 13 00:39:16.475666 kubelet[3328]: E0313 00:39:16.475629 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.475666 kubelet[3328]: W0313 00:39:16.475666 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.475860 kubelet[3328]: E0313 00:39:16.475690 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.476455 kubelet[3328]: I0313 00:39:16.475722 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c8d85f17-e076-4723-880f-19418eb52308-registration-dir\") pod \"csi-node-driver-bkmzp\" (UID: \"c8d85f17-e076-4723-880f-19418eb52308\") " pod="calico-system/csi-node-driver-bkmzp"
Mar 13 00:39:16.476792 kubelet[3328]: E0313 00:39:16.476768 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.476792 kubelet[3328]: W0313 00:39:16.476786 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.476911 kubelet[3328]: E0313 00:39:16.476804 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.477140 kubelet[3328]: E0313 00:39:16.477096 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.477140 kubelet[3328]: W0313 00:39:16.477130 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.478419 kubelet[3328]: E0313 00:39:16.478388 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.478732 kubelet[3328]: E0313 00:39:16.478711 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.478732 kubelet[3328]: W0313 00:39:16.478726 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.478834 kubelet[3328]: E0313 00:39:16.478757 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.478834 kubelet[3328]: I0313 00:39:16.478795 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8d85f17-e076-4723-880f-19418eb52308-kubelet-dir\") pod \"csi-node-driver-bkmzp\" (UID: \"c8d85f17-e076-4723-880f-19418eb52308\") " pod="calico-system/csi-node-driver-bkmzp"
Mar 13 00:39:16.479086 kubelet[3328]: E0313 00:39:16.479057 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.479086 kubelet[3328]: W0313 00:39:16.479081 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.479199 kubelet[3328]: E0313 00:39:16.479096 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.479406 kubelet[3328]: E0313 00:39:16.479382 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.479406 kubelet[3328]: W0313 00:39:16.479394 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.479507 kubelet[3328]: E0313 00:39:16.479407 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.479691 kubelet[3328]: E0313 00:39:16.479670 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.479691 kubelet[3328]: W0313 00:39:16.479683 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.479792 kubelet[3328]: E0313 00:39:16.479696 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.480175 kubelet[3328]: I0313 00:39:16.480146 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c8d85f17-e076-4723-880f-19418eb52308-socket-dir\") pod \"csi-node-driver-bkmzp\" (UID: \"c8d85f17-e076-4723-880f-19418eb52308\") " pod="calico-system/csi-node-driver-bkmzp"
Mar 13 00:39:16.480610 kubelet[3328]: E0313 00:39:16.480472 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.480610 kubelet[3328]: W0313 00:39:16.480490 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.480610 kubelet[3328]: E0313 00:39:16.480506 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.480784 kubelet[3328]: E0313 00:39:16.480745 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.480784 kubelet[3328]: W0313 00:39:16.480756 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.480784 kubelet[3328]: E0313 00:39:16.480769 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.481341 kubelet[3328]: E0313 00:39:16.481320 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.481669 kubelet[3328]: W0313 00:39:16.481642 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.481669 kubelet[3328]: E0313 00:39:16.481665 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.482014 kubelet[3328]: I0313 00:39:16.481987 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c8d85f17-e076-4723-880f-19418eb52308-varrun\") pod \"csi-node-driver-bkmzp\" (UID: \"c8d85f17-e076-4723-880f-19418eb52308\") " pod="calico-system/csi-node-driver-bkmzp"
Mar 13 00:39:16.482183 kubelet[3328]: E0313 00:39:16.482162 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.482183 kubelet[3328]: W0313 00:39:16.482177 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.482348 kubelet[3328]: E0313 00:39:16.482192 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.482908 kubelet[3328]: E0313 00:39:16.482877 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.483063 kubelet[3328]: W0313 00:39:16.482918 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.483063 kubelet[3328]: E0313 00:39:16.482935 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.483063 kubelet[3328]: I0313 00:39:16.482967 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcs2w\" (UniqueName: \"kubernetes.io/projected/c8d85f17-e076-4723-880f-19418eb52308-kube-api-access-zcs2w\") pod \"csi-node-driver-bkmzp\" (UID: \"c8d85f17-e076-4723-880f-19418eb52308\") " pod="calico-system/csi-node-driver-bkmzp"
Mar 13 00:39:16.483340 kubelet[3328]: E0313 00:39:16.483251 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.483340 kubelet[3328]: W0313 00:39:16.483261 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.483340 kubelet[3328]: E0313 00:39:16.483274 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.483840 kubelet[3328]: E0313 00:39:16.483822 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.484015 kubelet[3328]: W0313 00:39:16.483914 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.484015 kubelet[3328]: E0313 00:39:16.483932 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.484230 kubelet[3328]: E0313 00:39:16.484217 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.484305 kubelet[3328]: W0313 00:39:16.484295 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.484465 kubelet[3328]: E0313 00:39:16.484436 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.488891 containerd[1987]: time="2026-03-13T00:39:16.488775098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dc2zf,Uid:701018d6-0be1-4899-b7fc-4ad9e4a14401,Namespace:calico-system,Attempt:0,}"
Mar 13 00:39:16.536522 containerd[1987]: time="2026-03-13T00:39:16.535625354Z" level=info msg="connecting to shim 16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b" address="unix:///run/containerd/s/a4c4c6440554f2ead5134d27a1c811abeda988eb26416934090ed21239797355" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:39:16.545026 containerd[1987]: time="2026-03-13T00:39:16.544972816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c5454cd4f-fqwlb,Uid:54b1d5a0-3e27-4e03-83fa-6125838868b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"90e1a34604abaed0a57a74be923e09c182c3c1277692a87b7513108b705cfd84\""
Mar 13 00:39:16.548997 containerd[1987]: time="2026-03-13T00:39:16.548920986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 13 00:39:16.577853 systemd[1]: Started cri-containerd-16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b.scope - libcontainer container 16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b.
Mar 13 00:39:16.584950 kubelet[3328]: E0313 00:39:16.584894 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.585341 kubelet[3328]: W0313 00:39:16.584918 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.585341 kubelet[3328]: E0313 00:39:16.585052 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.585840 kubelet[3328]: E0313 00:39:16.585790 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.585953 kubelet[3328]: W0313 00:39:16.585804 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.585953 kubelet[3328]: E0313 00:39:16.585929 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.586556 kubelet[3328]: E0313 00:39:16.586521 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.586828 kubelet[3328]: W0313 00:39:16.586650 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.586828 kubelet[3328]: E0313 00:39:16.586672 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.587300 kubelet[3328]: E0313 00:39:16.587237 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.587300 kubelet[3328]: W0313 00:39:16.587256 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.587549 kubelet[3328]: E0313 00:39:16.587273 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.588044 kubelet[3328]: E0313 00:39:16.587987 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.588044 kubelet[3328]: W0313 00:39:16.588000 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.588044 kubelet[3328]: E0313 00:39:16.588013 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.588684 kubelet[3328]: E0313 00:39:16.588644 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.588858 kubelet[3328]: W0313 00:39:16.588767 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.588858 kubelet[3328]: E0313 00:39:16.588788 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.589299 kubelet[3328]: E0313 00:39:16.589273 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.589482 kubelet[3328]: W0313 00:39:16.589448 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.589653 kubelet[3328]: E0313 00:39:16.589547 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.590129 kubelet[3328]: E0313 00:39:16.590065 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.590129 kubelet[3328]: W0313 00:39:16.590079 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.590129 kubelet[3328]: E0313 00:39:16.590093 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:39:16.590730 kubelet[3328]: E0313 00:39:16.590715 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:16.590865 kubelet[3328]: W0313 00:39:16.590768 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:16.590865 kubelet[3328]: E0313 00:39:16.590783 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 13 00:39:16.591231 kubelet[3328]: E0313 00:39:16.591186 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.591231 kubelet[3328]: W0313 00:39:16.591201 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.591231 kubelet[3328]: E0313 00:39:16.591216 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.591600 kubelet[3328]: E0313 00:39:16.591588 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.591674 kubelet[3328]: W0313 00:39:16.591661 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.591751 kubelet[3328]: E0313 00:39:16.591735 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:16.592150 kubelet[3328]: E0313 00:39:16.592107 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.592150 kubelet[3328]: W0313 00:39:16.592122 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.592150 kubelet[3328]: E0313 00:39:16.592136 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.592625 kubelet[3328]: E0313 00:39:16.592575 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.592625 kubelet[3328]: W0313 00:39:16.592590 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.592625 kubelet[3328]: E0313 00:39:16.592605 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:16.593094 kubelet[3328]: E0313 00:39:16.593032 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.593094 kubelet[3328]: W0313 00:39:16.593048 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.593094 kubelet[3328]: E0313 00:39:16.593076 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.593865 kubelet[3328]: E0313 00:39:16.593819 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.593865 kubelet[3328]: W0313 00:39:16.593834 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.593865 kubelet[3328]: E0313 00:39:16.593847 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:16.594329 kubelet[3328]: E0313 00:39:16.594290 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.594329 kubelet[3328]: W0313 00:39:16.594303 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.594329 kubelet[3328]: E0313 00:39:16.594316 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.594778 kubelet[3328]: E0313 00:39:16.594735 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.594778 kubelet[3328]: W0313 00:39:16.594748 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.594778 kubelet[3328]: E0313 00:39:16.594764 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:16.595161 kubelet[3328]: E0313 00:39:16.595122 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.595161 kubelet[3328]: W0313 00:39:16.595134 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.595161 kubelet[3328]: E0313 00:39:16.595147 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.595742 kubelet[3328]: E0313 00:39:16.595681 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.595945 kubelet[3328]: W0313 00:39:16.595866 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.595945 kubelet[3328]: E0313 00:39:16.595887 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:16.596401 kubelet[3328]: E0313 00:39:16.596339 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.596531 kubelet[3328]: W0313 00:39:16.596468 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.596531 kubelet[3328]: E0313 00:39:16.596489 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.596937 kubelet[3328]: E0313 00:39:16.596908 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.597077 kubelet[3328]: W0313 00:39:16.597022 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.597077 kubelet[3328]: E0313 00:39:16.597039 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:16.597721 kubelet[3328]: E0313 00:39:16.597708 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.597942 kubelet[3328]: W0313 00:39:16.597768 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.597942 kubelet[3328]: E0313 00:39:16.597786 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.598311 kubelet[3328]: E0313 00:39:16.598280 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.598518 kubelet[3328]: W0313 00:39:16.598504 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.598753 kubelet[3328]: E0313 00:39:16.598561 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:16.599049 kubelet[3328]: E0313 00:39:16.598997 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.599049 kubelet[3328]: W0313 00:39:16.599011 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.599049 kubelet[3328]: E0313 00:39:16.599025 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.600420 kubelet[3328]: E0313 00:39:16.600401 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.600706 kubelet[3328]: W0313 00:39:16.600582 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.600706 kubelet[3328]: E0313 00:39:16.600654 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:16.612385 kubelet[3328]: E0313 00:39:16.612063 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:16.612385 kubelet[3328]: W0313 00:39:16.612089 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:16.612385 kubelet[3328]: E0313 00:39:16.612117 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:16.625192 containerd[1987]: time="2026-03-13T00:39:16.625022848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dc2zf,Uid:701018d6-0be1-4899-b7fc-4ad9e4a14401,Namespace:calico-system,Attempt:0,} returns sandbox id \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\"" Mar 13 00:39:17.847403 kubelet[3328]: E0313 00:39:17.845538 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:18.027653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644611023.mount: Deactivated successfully. 
Mar 13 00:39:18.876755 containerd[1987]: time="2026-03-13T00:39:18.876697341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:18.878555 containerd[1987]: time="2026-03-13T00:39:18.878513921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 13 00:39:18.880817 containerd[1987]: time="2026-03-13T00:39:18.880745126Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:18.884253 containerd[1987]: time="2026-03-13T00:39:18.884199592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:18.885389 containerd[1987]: time="2026-03-13T00:39:18.884834752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.335868543s"
Mar 13 00:39:18.885389 containerd[1987]: time="2026-03-13T00:39:18.884871133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 13 00:39:18.886485 containerd[1987]: time="2026-03-13T00:39:18.886454563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 13 00:39:18.936033 containerd[1987]: time="2026-03-13T00:39:18.935978742Z" level=info msg="CreateContainer within sandbox \"90e1a34604abaed0a57a74be923e09c182c3c1277692a87b7513108b705cfd84\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 13 00:39:18.974988 containerd[1987]: time="2026-03-13T00:39:18.972492780Z" level=info msg="Container 017395c7c6e0a4a0edd430f587588b87ec1e515e431b5d70eef488181c54339e: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:39:18.989019 containerd[1987]: time="2026-03-13T00:39:18.988970902Z" level=info msg="CreateContainer within sandbox \"90e1a34604abaed0a57a74be923e09c182c3c1277692a87b7513108b705cfd84\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"017395c7c6e0a4a0edd430f587588b87ec1e515e431b5d70eef488181c54339e\""
Mar 13 00:39:18.990375 containerd[1987]: time="2026-03-13T00:39:18.989862996Z" level=info msg="StartContainer for \"017395c7c6e0a4a0edd430f587588b87ec1e515e431b5d70eef488181c54339e\""
Mar 13 00:39:18.991678 containerd[1987]: time="2026-03-13T00:39:18.991647887Z" level=info msg="connecting to shim 017395c7c6e0a4a0edd430f587588b87ec1e515e431b5d70eef488181c54339e" address="unix:///run/containerd/s/73e0b2e3a09fe9cbca54e2a09093cf404035486a231487e7cf8fb178f09a4cbc" protocol=ttrpc version=3
Mar 13 00:39:19.020600 systemd[1]: Started cri-containerd-017395c7c6e0a4a0edd430f587588b87ec1e515e431b5d70eef488181c54339e.scope - libcontainer container 017395c7c6e0a4a0edd430f587588b87ec1e515e431b5d70eef488181c54339e.
Mar 13 00:39:19.095718 containerd[1987]: time="2026-03-13T00:39:19.095674729Z" level=info msg="StartContainer for \"017395c7c6e0a4a0edd430f587588b87ec1e515e431b5d70eef488181c54339e\" returns successfully"
Mar 13 00:39:19.845552 kubelet[3328]: E0313 00:39:19.845476 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308"
Mar 13 00:39:20.002045 kubelet[3328]: I0313 00:39:20.001976 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c5454cd4f-fqwlb" podStartSLOduration=2.663431881 podStartE2EDuration="5.001957993s" podCreationTimestamp="2026-03-13 00:39:15 +0000 UTC" firstStartedPulling="2026-03-13 00:39:16.547468077 +0000 UTC m=+21.886549564" lastFinishedPulling="2026-03-13 00:39:18.88599418 +0000 UTC m=+24.225075676" observedRunningTime="2026-03-13 00:39:19.996124433 +0000 UTC m=+25.335205956" watchObservedRunningTime="2026-03-13 00:39:20.001957993 +0000 UTC m=+25.341039487"
Mar 13 00:39:20.015262 kubelet[3328]: E0313 00:39:20.015230 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:39:20.015262 kubelet[3328]: W0313 00:39:20.015254 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:39:20.022662 kubelet[3328]: E0313 00:39:20.022614 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.022984 kubelet[3328]: E0313 00:39:20.022959 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.022984 kubelet[3328]: W0313 00:39:20.022980 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.023141 kubelet[3328]: E0313 00:39:20.023003 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:20.023241 kubelet[3328]: E0313 00:39:20.023224 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.023288 kubelet[3328]: W0313 00:39:20.023238 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.023288 kubelet[3328]: E0313 00:39:20.023253 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.023715 kubelet[3328]: E0313 00:39:20.023695 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.023715 kubelet[3328]: W0313 00:39:20.023709 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.023844 kubelet[3328]: E0313 00:39:20.023723 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:20.024113 kubelet[3328]: E0313 00:39:20.024092 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.024113 kubelet[3328]: W0313 00:39:20.024109 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.024238 kubelet[3328]: E0313 00:39:20.024125 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.024344 kubelet[3328]: E0313 00:39:20.024328 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.024344 kubelet[3328]: W0313 00:39:20.024341 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.024473 kubelet[3328]: E0313 00:39:20.024394 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:20.024639 kubelet[3328]: E0313 00:39:20.024577 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.024639 kubelet[3328]: W0313 00:39:20.024588 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.024639 kubelet[3328]: E0313 00:39:20.024599 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.025265 kubelet[3328]: E0313 00:39:20.024769 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.025265 kubelet[3328]: W0313 00:39:20.024778 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.025265 kubelet[3328]: E0313 00:39:20.024789 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:20.025265 kubelet[3328]: E0313 00:39:20.025026 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.025265 kubelet[3328]: W0313 00:39:20.025129 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.025265 kubelet[3328]: E0313 00:39:20.025145 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.026449 kubelet[3328]: E0313 00:39:20.025324 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.026449 kubelet[3328]: W0313 00:39:20.025334 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.026449 kubelet[3328]: E0313 00:39:20.025345 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:20.026449 kubelet[3328]: E0313 00:39:20.025706 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.026449 kubelet[3328]: W0313 00:39:20.025716 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.026449 kubelet[3328]: E0313 00:39:20.025729 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.026449 kubelet[3328]: E0313 00:39:20.025956 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.026449 kubelet[3328]: W0313 00:39:20.025968 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.026449 kubelet[3328]: E0313 00:39:20.026002 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:20.026449 kubelet[3328]: E0313 00:39:20.026246 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.027232 kubelet[3328]: W0313 00:39:20.026257 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.027232 kubelet[3328]: E0313 00:39:20.026269 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.027232 kubelet[3328]: E0313 00:39:20.026502 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.027232 kubelet[3328]: W0313 00:39:20.026512 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.027232 kubelet[3328]: E0313 00:39:20.026544 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:20.027232 kubelet[3328]: E0313 00:39:20.026743 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.027232 kubelet[3328]: W0313 00:39:20.026753 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.027232 kubelet[3328]: E0313 00:39:20.026786 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.027232 kubelet[3328]: E0313 00:39:20.027120 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.027232 kubelet[3328]: W0313 00:39:20.027130 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.027945 kubelet[3328]: E0313 00:39:20.027142 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:39:20.027945 kubelet[3328]: E0313 00:39:20.027430 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:39:20.027945 kubelet[3328]: W0313 00:39:20.027460 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:39:20.027945 kubelet[3328]: E0313 00:39:20.027474 3328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:39:20.294639 containerd[1987]: time="2026-03-13T00:39:20.294587934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:20.296684 containerd[1987]: time="2026-03-13T00:39:20.296540237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 13 00:39:20.298845 containerd[1987]: time="2026-03-13T00:39:20.298804451Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:20.302428 containerd[1987]: time="2026-03-13T00:39:20.302388943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:20.303743 containerd[1987]: time="2026-03-13T00:39:20.303233345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.41673717s" Mar 13 00:39:20.303743 containerd[1987]: time="2026-03-13T00:39:20.303271634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 13 00:39:20.309769 containerd[1987]: time="2026-03-13T00:39:20.309725821Z" level=info msg="CreateContainer within sandbox \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:39:20.326344 containerd[1987]: time="2026-03-13T00:39:20.324608719Z" level=info msg="Container d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:20.340461 containerd[1987]: time="2026-03-13T00:39:20.340412233Z" level=info msg="CreateContainer within sandbox \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5\"" Mar 13 00:39:20.341422 containerd[1987]: time="2026-03-13T00:39:20.341280515Z" level=info msg="StartContainer for \"d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5\"" Mar 13 00:39:20.344315 containerd[1987]: time="2026-03-13T00:39:20.344271473Z" level=info msg="connecting to shim d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5" address="unix:///run/containerd/s/a4c4c6440554f2ead5134d27a1c811abeda988eb26416934090ed21239797355" protocol=ttrpc version=3 Mar 13 00:39:20.380594 systemd[1]: Started cri-containerd-d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5.scope - libcontainer container d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5. Mar 13 00:39:20.491982 containerd[1987]: time="2026-03-13T00:39:20.491931228Z" level=info msg="StartContainer for \"d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5\" returns successfully" Mar 13 00:39:20.498046 systemd[1]: cri-containerd-d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5.scope: Deactivated successfully. 
Mar 13 00:39:20.533775 containerd[1987]: time="2026-03-13T00:39:20.533723399Z" level=info msg="received container exit event container_id:\"d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5\" id:\"d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5\" pid:4163 exited_at:{seconds:1773362360 nanos:501759250}" Mar 13 00:39:20.564146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d496c40fff4c5e040199138fb19c2c3ca9556066ba19519ddb8ec57f6d4f9ef5-rootfs.mount: Deactivated successfully. Mar 13 00:39:20.979464 kubelet[3328]: I0313 00:39:20.979415 3328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:39:20.981144 containerd[1987]: time="2026-03-13T00:39:20.980945918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:39:21.846066 kubelet[3328]: E0313 00:39:21.846003 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:23.845541 kubelet[3328]: E0313 00:39:23.845469 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:25.846183 kubelet[3328]: E0313 00:39:25.846118 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:27.853048 kubelet[3328]: 
E0313 00:39:27.852932 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:29.845712 kubelet[3328]: E0313 00:39:29.845650 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:30.903512 kubelet[3328]: I0313 00:39:30.903452 3328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:39:31.846208 kubelet[3328]: E0313 00:39:31.846135 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:32.002846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1702557358.mount: Deactivated successfully. 
Mar 13 00:39:32.052011 containerd[1987]: time="2026-03-13T00:39:32.051954003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:32.054787 containerd[1987]: time="2026-03-13T00:39:32.054715346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 13 00:39:32.055869 containerd[1987]: time="2026-03-13T00:39:32.055805763Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:32.059022 containerd[1987]: time="2026-03-13T00:39:32.058961222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:32.059860 containerd[1987]: time="2026-03-13T00:39:32.059643268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 11.078637224s" Mar 13 00:39:32.059860 containerd[1987]: time="2026-03-13T00:39:32.059684051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 13 00:39:32.066317 containerd[1987]: time="2026-03-13T00:39:32.066268930Z" level=info msg="CreateContainer within sandbox \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 00:39:32.098601 containerd[1987]: time="2026-03-13T00:39:32.098488919Z" level=info 
msg="Container c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:32.106321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010976196.mount: Deactivated successfully. Mar 13 00:39:32.122657 containerd[1987]: time="2026-03-13T00:39:32.122607134Z" level=info msg="CreateContainer within sandbox \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c\"" Mar 13 00:39:32.124347 containerd[1987]: time="2026-03-13T00:39:32.123350870Z" level=info msg="StartContainer for \"c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c\"" Mar 13 00:39:32.125559 containerd[1987]: time="2026-03-13T00:39:32.125523517Z" level=info msg="connecting to shim c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c" address="unix:///run/containerd/s/a4c4c6440554f2ead5134d27a1c811abeda988eb26416934090ed21239797355" protocol=ttrpc version=3 Mar 13 00:39:32.265617 systemd[1]: Started cri-containerd-c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c.scope - libcontainer container c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c. Mar 13 00:39:32.355795 containerd[1987]: time="2026-03-13T00:39:32.355572510Z" level=info msg="StartContainer for \"c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c\" returns successfully" Mar 13 00:39:32.545487 systemd[1]: cri-containerd-c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c.scope: Deactivated successfully. Mar 13 00:39:32.546191 systemd[1]: cri-containerd-c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c.scope: Consumed 91ms CPU time, 35.2M memory peak, 14M read from disk. 
Mar 13 00:39:32.601346 containerd[1987]: time="2026-03-13T00:39:32.601298183Z" level=info msg="received container exit event container_id:\"c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c\" id:\"c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c\" pid:4229 exited_at:{seconds:1773362372 nanos:580027361}" Mar 13 00:39:33.002058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c684d7693e411c865dfc59e28381b98385d8b8b00c7564df36fb9bc7586fa96c-rootfs.mount: Deactivated successfully. Mar 13 00:39:33.051965 containerd[1987]: time="2026-03-13T00:39:33.051922949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:39:33.846185 kubelet[3328]: E0313 00:39:33.846075 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:35.846501 kubelet[3328]: E0313 00:39:35.846425 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308" Mar 13 00:39:35.905524 containerd[1987]: time="2026-03-13T00:39:35.905309045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:35.907867 containerd[1987]: time="2026-03-13T00:39:35.907828972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:39:35.910448 containerd[1987]: time="2026-03-13T00:39:35.910409723Z" level=info msg="ImageCreate event 
name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:35.914570 containerd[1987]: time="2026-03-13T00:39:35.914261816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:35.915086 containerd[1987]: time="2026-03-13T00:39:35.915053989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.863082204s" Mar 13 00:39:35.915770 containerd[1987]: time="2026-03-13T00:39:35.915092358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:39:35.920815 containerd[1987]: time="2026-03-13T00:39:35.920769011Z" level=info msg="CreateContainer within sandbox \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:39:35.933994 containerd[1987]: time="2026-03-13T00:39:35.933874536Z" level=info msg="Container f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:35.995118 containerd[1987]: time="2026-03-13T00:39:35.995060637Z" level=info msg="CreateContainer within sandbox \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970\"" Mar 13 00:39:35.996053 containerd[1987]: time="2026-03-13T00:39:35.995738890Z" 
level=info msg="StartContainer for \"f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970\"" Mar 13 00:39:36.007631 containerd[1987]: time="2026-03-13T00:39:36.006266853Z" level=info msg="connecting to shim f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970" address="unix:///run/containerd/s/a4c4c6440554f2ead5134d27a1c811abeda988eb26416934090ed21239797355" protocol=ttrpc version=3 Mar 13 00:39:36.035630 systemd[1]: Started cri-containerd-f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970.scope - libcontainer container f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970. Mar 13 00:39:36.146316 containerd[1987]: time="2026-03-13T00:39:36.146275717Z" level=info msg="StartContainer for \"f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970\" returns successfully" Mar 13 00:39:37.372430 systemd[1]: cri-containerd-f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970.scope: Deactivated successfully. Mar 13 00:39:37.373383 systemd[1]: cri-containerd-f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970.scope: Consumed 605ms CPU time, 177.3M memory peak, 7.2M read from disk, 177M written to disk. Mar 13 00:39:37.392795 containerd[1987]: time="2026-03-13T00:39:37.392752604Z" level=info msg="received container exit event container_id:\"f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970\" id:\"f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970\" pid:4287 exited_at:{seconds:1773362377 nanos:392434651}" Mar 13 00:39:37.433503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8a49d66523431f6e0859f2236735f66b93030e393ffb606fad752ea92f18970-rootfs.mount: Deactivated successfully. 
Mar 13 00:39:37.473157 kubelet[3328]: I0313 00:39:37.473129 3328 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 13 00:39:37.548561 systemd[1]: Created slice kubepods-burstable-pod22a17a20_5da9_4c40_a71d_b92e59cd60c8.slice - libcontainer container kubepods-burstable-pod22a17a20_5da9_4c40_a71d_b92e59cd60c8.slice. Mar 13 00:39:37.562021 systemd[1]: Created slice kubepods-burstable-pod98194567_8a9d_4ddc_aedd_092811d13223.slice - libcontainer container kubepods-burstable-pod98194567_8a9d_4ddc_aedd_092811d13223.slice. Mar 13 00:39:37.577419 kubelet[3328]: E0313 00:39:37.577376 3328 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"whisker-nginx-config\" is forbidden: User \"system:node:ip-172-31-22-244\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-244' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-nginx-config\"" type="*v1.ConfigMap" Mar 13 00:39:37.577560 kubelet[3328]: I0313 00:39:37.577520 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8kq4\" (UniqueName: \"kubernetes.io/projected/98194567-8a9d-4ddc-aedd-092811d13223-kube-api-access-c8kq4\") pod \"coredns-674b8bbfcf-8xphx\" (UID: \"98194567-8a9d-4ddc-aedd-092811d13223\") " pod="kube-system/coredns-674b8bbfcf-8xphx" Mar 13 00:39:37.577560 kubelet[3328]: I0313 00:39:37.577548 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22a17a20-5da9-4c40-a71d-b92e59cd60c8-config-volume\") pod \"coredns-674b8bbfcf-nd7nx\" (UID: \"22a17a20-5da9-4c40-a71d-b92e59cd60c8\") " pod="kube-system/coredns-674b8bbfcf-nd7nx" Mar 13 00:39:37.577651 kubelet[3328]: I0313 00:39:37.577584 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-ca-bundle\") pod \"whisker-7bf747955c-zvrbt\" (UID: \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\") " pod="calico-system/whisker-7bf747955c-zvrbt" Mar 13 00:39:37.577651 kubelet[3328]: I0313 00:39:37.577613 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2974\" (UniqueName: \"kubernetes.io/projected/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-kube-api-access-t2974\") pod \"whisker-7bf747955c-zvrbt\" (UID: \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\") " pod="calico-system/whisker-7bf747955c-zvrbt" Mar 13 00:39:37.577651 kubelet[3328]: I0313 00:39:37.577638 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-nginx-config\") pod \"whisker-7bf747955c-zvrbt\" (UID: \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\") " pod="calico-system/whisker-7bf747955c-zvrbt" Mar 13 00:39:37.577780 kubelet[3328]: I0313 00:39:37.577662 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-backend-key-pair\") pod \"whisker-7bf747955c-zvrbt\" (UID: \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\") " pod="calico-system/whisker-7bf747955c-zvrbt" Mar 13 00:39:37.577780 kubelet[3328]: I0313 00:39:37.577694 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98194567-8a9d-4ddc-aedd-092811d13223-config-volume\") pod \"coredns-674b8bbfcf-8xphx\" (UID: \"98194567-8a9d-4ddc-aedd-092811d13223\") " pod="kube-system/coredns-674b8bbfcf-8xphx" Mar 13 00:39:37.577780 kubelet[3328]: I0313 00:39:37.577717 3328 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5n7d\" (UniqueName: \"kubernetes.io/projected/22a17a20-5da9-4c40-a71d-b92e59cd60c8-kube-api-access-j5n7d\") pod \"coredns-674b8bbfcf-nd7nx\" (UID: \"22a17a20-5da9-4c40-a71d-b92e59cd60c8\") " pod="kube-system/coredns-674b8bbfcf-nd7nx" Mar 13 00:39:37.579118 kubelet[3328]: E0313 00:39:37.578934 3328 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-22-244\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-244' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-ca-bundle\"" type="*v1.ConfigMap" Mar 13 00:39:37.579118 kubelet[3328]: E0313 00:39:37.579061 3328 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-22-244\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-244' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-backend-key-pair\"" type="*v1.Secret" Mar 13 00:39:37.581045 systemd[1]: Created slice kubepods-besteffort-podb9bbf7a5_b532_4c06_848a_9c2ee2d5fe37.slice - libcontainer container kubepods-besteffort-podb9bbf7a5_b532_4c06_848a_9c2ee2d5fe37.slice. Mar 13 00:39:37.608011 systemd[1]: Created slice kubepods-besteffort-pod05bcf32a_e5fb_4c55_add3_c1efda8a3268.slice - libcontainer container kubepods-besteffort-pod05bcf32a_e5fb_4c55_add3_c1efda8a3268.slice. Mar 13 00:39:37.622916 systemd[1]: Created slice kubepods-besteffort-pod5c86dfd6_a5e4_4241_8df8_e41d28563f15.slice - libcontainer container kubepods-besteffort-pod5c86dfd6_a5e4_4241_8df8_e41d28563f15.slice. 
Mar 13 00:39:37.635637 systemd[1]: Created slice kubepods-besteffort-pod7c86ec91_9cc0_46f5_9cdd_0a50a7ddacaf.slice - libcontainer container kubepods-besteffort-pod7c86ec91_9cc0_46f5_9cdd_0a50a7ddacaf.slice.
Mar 13 00:39:37.647140 systemd[1]: Created slice kubepods-besteffort-pod232d682b_1e73_4352_a839_d93cdc2b9af5.slice - libcontainer container kubepods-besteffort-pod232d682b_1e73_4352_a839_d93cdc2b9af5.slice.
Mar 13 00:39:37.678498 kubelet[3328]: I0313 00:39:37.678407 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bcf32a-e5fb-4c55-add3-c1efda8a3268-config\") pod \"goldmane-5b85766d88-s47n2\" (UID: \"05bcf32a-e5fb-4c55-add3-c1efda8a3268\") " pod="calico-system/goldmane-5b85766d88-s47n2"
Mar 13 00:39:37.679059 kubelet[3328]: I0313 00:39:37.678975 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pqx\" (UniqueName: \"kubernetes.io/projected/05bcf32a-e5fb-4c55-add3-c1efda8a3268-kube-api-access-n8pqx\") pod \"goldmane-5b85766d88-s47n2\" (UID: \"05bcf32a-e5fb-4c55-add3-c1efda8a3268\") " pod="calico-system/goldmane-5b85766d88-s47n2"
Mar 13 00:39:37.679059 kubelet[3328]: I0313 00:39:37.679019 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c86dfd6-a5e4-4241-8df8-e41d28563f15-calico-apiserver-certs\") pod \"calico-apiserver-f4fc4d544-k9996\" (UID: \"5c86dfd6-a5e4-4241-8df8-e41d28563f15\") " pod="calico-system/calico-apiserver-f4fc4d544-k9996"
Mar 13 00:39:37.679713 kubelet[3328]: I0313 00:39:37.679218 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf-calico-apiserver-certs\") pod \"calico-apiserver-f4fc4d544-jrxrr\" (UID: \"7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf\") " pod="calico-system/calico-apiserver-f4fc4d544-jrxrr"
Mar 13 00:39:37.679713 kubelet[3328]: I0313 00:39:37.679260 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05bcf32a-e5fb-4c55-add3-c1efda8a3268-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-s47n2\" (UID: \"05bcf32a-e5fb-4c55-add3-c1efda8a3268\") " pod="calico-system/goldmane-5b85766d88-s47n2"
Mar 13 00:39:37.679713 kubelet[3328]: I0313 00:39:37.679322 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nkm9\" (UniqueName: \"kubernetes.io/projected/232d682b-1e73-4352-a839-d93cdc2b9af5-kube-api-access-9nkm9\") pod \"calico-kube-controllers-8498cbcb77-2dh8t\" (UID: \"232d682b-1e73-4352-a839-d93cdc2b9af5\") " pod="calico-system/calico-kube-controllers-8498cbcb77-2dh8t"
Mar 13 00:39:37.679713 kubelet[3328]: I0313 00:39:37.679348 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/05bcf32a-e5fb-4c55-add3-c1efda8a3268-goldmane-key-pair\") pod \"goldmane-5b85766d88-s47n2\" (UID: \"05bcf32a-e5fb-4c55-add3-c1efda8a3268\") " pod="calico-system/goldmane-5b85766d88-s47n2"
Mar 13 00:39:37.679713 kubelet[3328]: I0313 00:39:37.679491 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/232d682b-1e73-4352-a839-d93cdc2b9af5-tigera-ca-bundle\") pod \"calico-kube-controllers-8498cbcb77-2dh8t\" (UID: \"232d682b-1e73-4352-a839-d93cdc2b9af5\") " pod="calico-system/calico-kube-controllers-8498cbcb77-2dh8t"
Mar 13 00:39:37.680639 kubelet[3328]: I0313 00:39:37.679535 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnvdp\" (UniqueName: \"kubernetes.io/projected/5c86dfd6-a5e4-4241-8df8-e41d28563f15-kube-api-access-nnvdp\") pod \"calico-apiserver-f4fc4d544-k9996\" (UID: \"5c86dfd6-a5e4-4241-8df8-e41d28563f15\") " pod="calico-system/calico-apiserver-f4fc4d544-k9996"
Mar 13 00:39:37.680639 kubelet[3328]: I0313 00:39:37.679583 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2524n\" (UniqueName: \"kubernetes.io/projected/7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf-kube-api-access-2524n\") pod \"calico-apiserver-f4fc4d544-jrxrr\" (UID: \"7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf\") " pod="calico-system/calico-apiserver-f4fc4d544-jrxrr"
Mar 13 00:39:37.851659 systemd[1]: Created slice kubepods-besteffort-podc8d85f17_e076_4723_880f_19418eb52308.slice - libcontainer container kubepods-besteffort-podc8d85f17_e076_4723_880f_19418eb52308.slice.
Mar 13 00:39:37.861787 containerd[1987]: time="2026-03-13T00:39:37.860698956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd7nx,Uid:22a17a20-5da9-4c40-a71d-b92e59cd60c8,Namespace:kube-system,Attempt:0,}"
Mar 13 00:39:37.862039 containerd[1987]: time="2026-03-13T00:39:37.862006672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkmzp,Uid:c8d85f17-e076-4723-880f-19418eb52308,Namespace:calico-system,Attempt:0,}"
Mar 13 00:39:37.891391 containerd[1987]: time="2026-03-13T00:39:37.891118644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8xphx,Uid:98194567-8a9d-4ddc-aedd-092811d13223,Namespace:kube-system,Attempt:0,}"
Mar 13 00:39:37.930401 containerd[1987]: time="2026-03-13T00:39:37.930316384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4fc4d544-k9996,Uid:5c86dfd6-a5e4-4241-8df8-e41d28563f15,Namespace:calico-system,Attempt:0,}"
Mar 13 00:39:37.939814 containerd[1987]: time="2026-03-13T00:39:37.939772038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-s47n2,Uid:05bcf32a-e5fb-4c55-add3-c1efda8a3268,Namespace:calico-system,Attempt:0,}"
Mar 13 00:39:37.944116 containerd[1987]: time="2026-03-13T00:39:37.943891179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4fc4d544-jrxrr,Uid:7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf,Namespace:calico-system,Attempt:0,}"
Mar 13 00:39:37.951527 containerd[1987]: time="2026-03-13T00:39:37.951471272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8498cbcb77-2dh8t,Uid:232d682b-1e73-4352-a839-d93cdc2b9af5,Namespace:calico-system,Attempt:0,}"
Mar 13 00:39:38.148667 containerd[1987]: time="2026-03-13T00:39:38.148559097Z" level=info msg="CreateContainer within sandbox \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 13 00:39:38.224234 containerd[1987]: time="2026-03-13T00:39:38.224136114Z" level=info msg="Container c47ccb6d938b57872deebfefad7166ba002266962eb7193f7462c8eef5bcda6e: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:39:38.242325 containerd[1987]: time="2026-03-13T00:39:38.242274079Z" level=info msg="CreateContainer within sandbox \"16c8d4881d284d9c4a3041ddd0df64100e496f2495436c7c600dad37177aff0b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c47ccb6d938b57872deebfefad7166ba002266962eb7193f7462c8eef5bcda6e\""
Mar 13 00:39:38.244640 containerd[1987]: time="2026-03-13T00:39:38.244589073Z" level=info msg="StartContainer for \"c47ccb6d938b57872deebfefad7166ba002266962eb7193f7462c8eef5bcda6e\""
Mar 13 00:39:38.248670 containerd[1987]: time="2026-03-13T00:39:38.248461949Z" level=info msg="connecting to shim c47ccb6d938b57872deebfefad7166ba002266962eb7193f7462c8eef5bcda6e" address="unix:///run/containerd/s/a4c4c6440554f2ead5134d27a1c811abeda988eb26416934090ed21239797355" protocol=ttrpc version=3
Mar 13 00:39:38.280751 systemd[1]: Started cri-containerd-c47ccb6d938b57872deebfefad7166ba002266962eb7193f7462c8eef5bcda6e.scope - libcontainer container c47ccb6d938b57872deebfefad7166ba002266962eb7193f7462c8eef5bcda6e.
Mar 13 00:39:38.473240 containerd[1987]: time="2026-03-13T00:39:38.472801217Z" level=info msg="StartContainer for \"c47ccb6d938b57872deebfefad7166ba002266962eb7193f7462c8eef5bcda6e\" returns successfully"
Mar 13 00:39:38.546690 containerd[1987]: time="2026-03-13T00:39:38.546622017Z" level=error msg="Failed to destroy network for sandbox \"e35acfb92d6573ed80fc8174dc235d8c7dfbf77142e430f24dee262bf76a2a06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.552769 systemd[1]: run-netns-cni\x2df09a97ea\x2d2794\x2d479b\x2d7210\x2d19fc828d1334.mount: Deactivated successfully.
Mar 13 00:39:38.566381 containerd[1987]: time="2026-03-13T00:39:38.565478445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8498cbcb77-2dh8t,Uid:232d682b-1e73-4352-a839-d93cdc2b9af5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35acfb92d6573ed80fc8174dc235d8c7dfbf77142e430f24dee262bf76a2a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.628454 containerd[1987]: time="2026-03-13T00:39:38.628332586Z" level=error msg="Failed to destroy network for sandbox \"91a821cce95e53d284043981803877981acf0a6736a3f116fa0f54713b27dead\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.634474 containerd[1987]: time="2026-03-13T00:39:38.634398601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8xphx,Uid:98194567-8a9d-4ddc-aedd-092811d13223,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91a821cce95e53d284043981803877981acf0a6736a3f116fa0f54713b27dead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.634902 systemd[1]: run-netns-cni\x2df9bdd90a\x2de5cb\x2d3848\x2d8558\x2d5d9a8bc7672d.mount: Deactivated successfully.
Mar 13 00:39:38.636270 kubelet[3328]: E0313 00:39:38.634911 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91a821cce95e53d284043981803877981acf0a6736a3f116fa0f54713b27dead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.636270 kubelet[3328]: E0313 00:39:38.634994 3328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91a821cce95e53d284043981803877981acf0a6736a3f116fa0f54713b27dead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8xphx"
Mar 13 00:39:38.636270 kubelet[3328]: E0313 00:39:38.635026 3328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91a821cce95e53d284043981803877981acf0a6736a3f116fa0f54713b27dead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8xphx"
Mar 13 00:39:38.637245 kubelet[3328]: E0313 00:39:38.635086 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8xphx_kube-system(98194567-8a9d-4ddc-aedd-092811d13223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8xphx_kube-system(98194567-8a9d-4ddc-aedd-092811d13223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91a821cce95e53d284043981803877981acf0a6736a3f116fa0f54713b27dead\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8xphx" podUID="98194567-8a9d-4ddc-aedd-092811d13223"
Mar 13 00:39:38.642329 kubelet[3328]: E0313 00:39:38.640947 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35acfb92d6573ed80fc8174dc235d8c7dfbf77142e430f24dee262bf76a2a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.642329 kubelet[3328]: E0313 00:39:38.641372 3328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35acfb92d6573ed80fc8174dc235d8c7dfbf77142e430f24dee262bf76a2a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8498cbcb77-2dh8t"
Mar 13 00:39:38.642329 kubelet[3328]: E0313 00:39:38.641425 3328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35acfb92d6573ed80fc8174dc235d8c7dfbf77142e430f24dee262bf76a2a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8498cbcb77-2dh8t"
Mar 13 00:39:38.642634 kubelet[3328]: E0313 00:39:38.641510 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8498cbcb77-2dh8t_calico-system(232d682b-1e73-4352-a839-d93cdc2b9af5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8498cbcb77-2dh8t_calico-system(232d682b-1e73-4352-a839-d93cdc2b9af5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e35acfb92d6573ed80fc8174dc235d8c7dfbf77142e430f24dee262bf76a2a06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8498cbcb77-2dh8t" podUID="232d682b-1e73-4352-a839-d93cdc2b9af5"
Mar 13 00:39:38.656736 containerd[1987]: time="2026-03-13T00:39:38.656662859Z" level=error msg="Failed to destroy network for sandbox \"3ef2a8c216af4222d9b4914efd877a237722731da6cadd4d55b4c09fd7cc0441\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.660797 containerd[1987]: time="2026-03-13T00:39:38.659186435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-s47n2,Uid:05bcf32a-e5fb-4c55-add3-c1efda8a3268,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ef2a8c216af4222d9b4914efd877a237722731da6cadd4d55b4c09fd7cc0441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.661275 systemd[1]: run-netns-cni\x2d0fb7f1e4\x2dec91\x2d8e42\x2dcc27\x2d6b1582021eb5.mount: Deactivated successfully.
Mar 13 00:39:38.663925 kubelet[3328]: E0313 00:39:38.662045 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ef2a8c216af4222d9b4914efd877a237722731da6cadd4d55b4c09fd7cc0441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.663925 kubelet[3328]: E0313 00:39:38.662124 3328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ef2a8c216af4222d9b4914efd877a237722731da6cadd4d55b4c09fd7cc0441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-s47n2"
Mar 13 00:39:38.663925 kubelet[3328]: E0313 00:39:38.662159 3328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ef2a8c216af4222d9b4914efd877a237722731da6cadd4d55b4c09fd7cc0441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-s47n2"
Mar 13 00:39:38.664189 kubelet[3328]: E0313 00:39:38.662221 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-s47n2_calico-system(05bcf32a-e5fb-4c55-add3-c1efda8a3268)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-s47n2_calico-system(05bcf32a-e5fb-4c55-add3-c1efda8a3268)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ef2a8c216af4222d9b4914efd877a237722731da6cadd4d55b4c09fd7cc0441\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-s47n2" podUID="05bcf32a-e5fb-4c55-add3-c1efda8a3268"
Mar 13 00:39:38.672999 containerd[1987]: time="2026-03-13T00:39:38.672946657Z" level=error msg="Failed to destroy network for sandbox \"d0ad2ebf7352c10519f580f16678fe9bc9f5310bb81a0fcb0fa8cae7868c01ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.674505 containerd[1987]: time="2026-03-13T00:39:38.674264987Z" level=error msg="Failed to destroy network for sandbox \"303f5641ed4eda0154b031eeaf23a23cac858ec1c39ac9db0b423000c1265b8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.677372 containerd[1987]: time="2026-03-13T00:39:38.675051747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd7nx,Uid:22a17a20-5da9-4c40-a71d-b92e59cd60c8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ad2ebf7352c10519f580f16678fe9bc9f5310bb81a0fcb0fa8cae7868c01ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.678405 kubelet[3328]: E0313 00:39:38.677696 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ad2ebf7352c10519f580f16678fe9bc9f5310bb81a0fcb0fa8cae7868c01ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.678594 containerd[1987]: time="2026-03-13T00:39:38.678509751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4fc4d544-k9996,Uid:5c86dfd6-a5e4-4241-8df8-e41d28563f15,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"303f5641ed4eda0154b031eeaf23a23cac858ec1c39ac9db0b423000c1265b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.679408 kubelet[3328]: E0313 00:39:38.678872 3328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ad2ebf7352c10519f580f16678fe9bc9f5310bb81a0fcb0fa8cae7868c01ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nd7nx"
Mar 13 00:39:38.679381 systemd[1]: run-netns-cni\x2d44f11282\x2dfd9c\x2d3221\x2d7a6b\x2d9bb00b68fbb1.mount: Deactivated successfully.
Mar 13 00:39:38.683438 containerd[1987]: time="2026-03-13T00:39:38.679466575Z" level=error msg="Failed to destroy network for sandbox \"9da965f8b277af0713ccaf25e916e0d5bef3501e532d8424a8e1359a493cd409\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.683438 containerd[1987]: time="2026-03-13T00:39:38.680571912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkmzp,Uid:c8d85f17-e076-4723-880f-19418eb52308,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9da965f8b277af0713ccaf25e916e0d5bef3501e532d8424a8e1359a493cd409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.683605 kubelet[3328]: E0313 00:39:38.679959 3328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ad2ebf7352c10519f580f16678fe9bc9f5310bb81a0fcb0fa8cae7868c01ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nd7nx"
Mar 13 00:39:38.683605 kubelet[3328]: E0313 00:39:38.680056 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nd7nx_kube-system(22a17a20-5da9-4c40-a71d-b92e59cd60c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nd7nx_kube-system(22a17a20-5da9-4c40-a71d-b92e59cd60c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0ad2ebf7352c10519f580f16678fe9bc9f5310bb81a0fcb0fa8cae7868c01ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nd7nx" podUID="22a17a20-5da9-4c40-a71d-b92e59cd60c8"
Mar 13 00:39:38.686234 kubelet[3328]: E0313 00:39:38.683779 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303f5641ed4eda0154b031eeaf23a23cac858ec1c39ac9db0b423000c1265b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.686234 kubelet[3328]: E0313 00:39:38.683845 3328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303f5641ed4eda0154b031eeaf23a23cac858ec1c39ac9db0b423000c1265b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-f4fc4d544-k9996"
Mar 13 00:39:38.686234 kubelet[3328]: E0313 00:39:38.683871 3328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303f5641ed4eda0154b031eeaf23a23cac858ec1c39ac9db0b423000c1265b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-f4fc4d544-k9996"
Mar 13 00:39:38.686464 kubelet[3328]: E0313 00:39:38.683931 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4fc4d544-k9996_calico-system(5c86dfd6-a5e4-4241-8df8-e41d28563f15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4fc4d544-k9996_calico-system(5c86dfd6-a5e4-4241-8df8-e41d28563f15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"303f5641ed4eda0154b031eeaf23a23cac858ec1c39ac9db0b423000c1265b8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-f4fc4d544-k9996" podUID="5c86dfd6-a5e4-4241-8df8-e41d28563f15"
Mar 13 00:39:38.688931 kubelet[3328]: E0313 00:39:38.687577 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9da965f8b277af0713ccaf25e916e0d5bef3501e532d8424a8e1359a493cd409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.688931 kubelet[3328]: E0313 00:39:38.687637 3328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9da965f8b277af0713ccaf25e916e0d5bef3501e532d8424a8e1359a493cd409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkmzp"
Mar 13 00:39:38.688931 kubelet[3328]: E0313 00:39:38.687663 3328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9da965f8b277af0713ccaf25e916e0d5bef3501e532d8424a8e1359a493cd409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkmzp"
Mar 13 00:39:38.689161 kubelet[3328]: E0313 00:39:38.687727 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bkmzp_calico-system(c8d85f17-e076-4723-880f-19418eb52308)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkmzp_calico-system(c8d85f17-e076-4723-880f-19418eb52308)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9da965f8b277af0713ccaf25e916e0d5bef3501e532d8424a8e1359a493cd409\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkmzp" podUID="c8d85f17-e076-4723-880f-19418eb52308"
Mar 13 00:39:38.690028 kubelet[3328]: E0313 00:39:38.689718 3328 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition
Mar 13 00:39:38.697944 containerd[1987]: time="2026-03-13T00:39:38.697061233Z" level=error msg="Failed to destroy network for sandbox \"a1721b11d6d3106c4dfb0e7ee5661b45c5a768d1b7c11ff6d26b5bca518daaf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.699470 containerd[1987]: time="2026-03-13T00:39:38.699405235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4fc4d544-jrxrr,Uid:7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1721b11d6d3106c4dfb0e7ee5661b45c5a768d1b7c11ff6d26b5bca518daaf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.699810 kubelet[3328]: E0313 00:39:38.699746 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1721b11d6d3106c4dfb0e7ee5661b45c5a768d1b7c11ff6d26b5bca518daaf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:39:38.699810 kubelet[3328]: E0313 00:39:38.699804 3328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1721b11d6d3106c4dfb0e7ee5661b45c5a768d1b7c11ff6d26b5bca518daaf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-f4fc4d544-jrxrr"
Mar 13 00:39:38.700038 kubelet[3328]: E0313 00:39:38.699827 3328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1721b11d6d3106c4dfb0e7ee5661b45c5a768d1b7c11ff6d26b5bca518daaf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-f4fc4d544-jrxrr"
Mar 13 00:39:38.700038 kubelet[3328]: E0313 00:39:38.699886 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4fc4d544-jrxrr_calico-system(7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4fc4d544-jrxrr_calico-system(7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1721b11d6d3106c4dfb0e7ee5661b45c5a768d1b7c11ff6d26b5bca518daaf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-f4fc4d544-jrxrr" podUID="7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf"
Mar 13 00:39:38.704399 kubelet[3328]: E0313 00:39:38.704366 3328 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-backend-key-pair podName:b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37 nodeName:}" failed. No retries permitted until 2026-03-13 00:39:39.189818144 +0000 UTC m=+44.528899637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-backend-key-pair") pod "whisker-7bf747955c-zvrbt" (UID: "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37") : failed to sync secret cache: timed out waiting for the condition
Mar 13 00:39:39.164385 kubelet[3328]: I0313 00:39:39.160782 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dc2zf" podStartSLOduration=3.870671957 podStartE2EDuration="23.160755611s" podCreationTimestamp="2026-03-13 00:39:16 +0000 UTC" firstStartedPulling="2026-03-13 00:39:16.626386291 +0000 UTC m=+21.965467775" lastFinishedPulling="2026-03-13 00:39:35.916469943 +0000 UTC m=+41.255551429" observedRunningTime="2026-03-13 00:39:39.159449111 +0000 UTC m=+44.498530632" watchObservedRunningTime="2026-03-13 00:39:39.160755611 +0000 UTC m=+44.499837107"
Mar 13 00:39:39.344644 containerd[1987]: time="2026-03-13T00:39:39.344604162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bf747955c-zvrbt,Uid:b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37,Namespace:calico-system,Attempt:0,}"
Mar 13 00:39:39.442741 systemd[1]: run-netns-cni\x2d69ab6fbf\x2d75f7\x2d5abd\x2d3a0a\x2def852e53078e.mount: Deactivated successfully.
Mar 13 00:39:39.442864 systemd[1]: run-netns-cni\x2d6523083c\x2d5577\x2d6ce0\x2d809d\x2d9dbd3ff687c1.mount: Deactivated successfully.
Mar 13 00:39:39.442940 systemd[1]: run-netns-cni\x2defbf1969\x2d8e30\x2df0d7\x2dd9c9\x2d9e1ad6c55598.mount: Deactivated successfully.
Mar 13 00:39:39.864379 systemd-networkd[1795]: cali3e4ffebeb65: Link UP
Mar 13 00:39:39.865822 systemd-networkd[1795]: cali3e4ffebeb65: Gained carrier
Mar 13 00:39:39.885843 (udev-worker)[4599]: Network interface NamePolicy= disabled on kernel command line.
Mar 13 00:39:39.924512 containerd[1987]: 2026-03-13 00:39:39.460 [ERROR][4549] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 13 00:39:39.924512 containerd[1987]: 2026-03-13 00:39:39.550 [INFO][4549] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0 whisker-7bf747955c- calico-system b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37 880 0 2026-03-13 00:39:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bf747955c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-244 whisker-7bf747955c-zvrbt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3e4ffebeb65 [] [] }} ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Namespace="calico-system" Pod="whisker-7bf747955c-zvrbt" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-"
Mar 13 00:39:39.924512 containerd[1987]: 2026-03-13 00:39:39.550 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Namespace="calico-system" Pod="whisker-7bf747955c-zvrbt" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0"
Mar 13 00:39:39.924512 containerd[1987]: 2026-03-13 00:39:39.665 [INFO][4588] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0"
Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.679 [INFO][4588] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-244", "pod":"whisker-7bf747955c-zvrbt", "timestamp":"2026-03-13 00:39:39.665482877 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00041adc0)}
Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.679 [INFO][4588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.679 [INFO][4588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.679 [INFO][4588] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.683 [INFO][4588] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" host="ip-172-31-22-244" Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.707 [INFO][4588] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.725 [INFO][4588] ipam/ipam.go 558: Ran out of existing affine blocks for host host="ip-172-31-22-244" Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.730 [INFO][4588] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="ip-172-31-22-244" Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.734 [INFO][4588] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.92.64/26 Mar 13 00:39:39.926830 containerd[1987]: 2026-03-13 00:39:39.734 [INFO][4588] ipam/ipam.go 588: Found unclaimed block in 3.909816ms host="ip-172-31-22-244" subnet=192.168.92.64/26 Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.734 [INFO][4588] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ip-172-31-22-244" subnet=192.168.92.64/26 Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.750 [INFO][4588] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ip-172-31-22-244" subnet=192.168.92.64/26 Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.751 [INFO][4588] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.758 [INFO][4588] ipam/ipam.go 165: The referenced block doesn't exist, trying to 
create it cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.768 [INFO][4588] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.771 [INFO][4588] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.771 [INFO][4588] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ip-172-31-22-244" subnet=192.168.92.64/26 Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.785 [INFO][4588] ipam/ipam_block_reader_writer.go 267: Successfully created block Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.785 [INFO][4588] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ip-172-31-22-244" subnet=192.168.92.64/26 Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.793 [INFO][4588] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ip-172-31-22-244" subnet=192.168.92.64/26 Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.793 [INFO][4588] ipam/ipam.go 623: Block '192.168.92.64/26' has 64 free ips which is more than 1 ips required. 
host="ip-172-31-22-244" subnet=192.168.92.64/26 Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.793 [INFO][4588] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" host="ip-172-31-22-244" Mar 13 00:39:39.927249 containerd[1987]: 2026-03-13 00:39:39.796 [INFO][4588] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297 Mar 13 00:39:39.930900 containerd[1987]: 2026-03-13 00:39:39.809 [INFO][4588] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" host="ip-172-31-22-244" Mar 13 00:39:39.930900 containerd[1987]: 2026-03-13 00:39:39.817 [INFO][4588] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.64/26] block=192.168.92.64/26 handle="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" host="ip-172-31-22-244" Mar 13 00:39:39.930900 containerd[1987]: 2026-03-13 00:39:39.818 [INFO][4588] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.64/26] handle="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" host="ip-172-31-22-244" Mar 13 00:39:39.930900 containerd[1987]: 2026-03-13 00:39:39.818 [INFO][4588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:39:39.930900 containerd[1987]: 2026-03-13 00:39:39.818 [INFO][4588] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.64/26] IPv6=[] ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:39.936973 containerd[1987]: 2026-03-13 00:39:39.824 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Namespace="calico-system" Pod="whisker-7bf747955c-zvrbt" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0", GenerateName:"whisker-7bf747955c-", Namespace:"calico-system", SelfLink:"", UID:"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bf747955c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"whisker-7bf747955c-zvrbt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali3e4ffebeb65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:39.936973 containerd[1987]: 2026-03-13 00:39:39.824 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.64/32] ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Namespace="calico-system" Pod="whisker-7bf747955c-zvrbt" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:39.937095 containerd[1987]: 2026-03-13 00:39:39.824 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e4ffebeb65 ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Namespace="calico-system" Pod="whisker-7bf747955c-zvrbt" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:39.937095 containerd[1987]: 2026-03-13 00:39:39.860 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Namespace="calico-system" Pod="whisker-7bf747955c-zvrbt" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:39.937145 containerd[1987]: 2026-03-13 00:39:39.860 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Namespace="calico-system" Pod="whisker-7bf747955c-zvrbt" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0", GenerateName:"whisker-7bf747955c-", Namespace:"calico-system", SelfLink:"", UID:"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37", ResourceVersion:"880", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bf747955c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297", Pod:"whisker-7bf747955c-zvrbt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3e4ffebeb65", MAC:"86:6b:76:e9:18:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:39.937204 containerd[1987]: 2026-03-13 00:39:39.917 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Namespace="calico-system" Pod="whisker-7bf747955c-zvrbt" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:40.030296 containerd[1987]: time="2026-03-13T00:39:40.030241911Z" level=info msg="connecting to shim b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" address="unix:///run/containerd/s/7e92f7bf4f4fb94e38b10c80b17d55f3dbcc6f38a8b0db21a94d5e66805293b1" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:40.065028 systemd[1]: Started cri-containerd-b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297.scope - libcontainer container 
b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297. Mar 13 00:39:40.128806 containerd[1987]: time="2026-03-13T00:39:40.128760405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bf747955c-zvrbt,Uid:b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\"" Mar 13 00:39:40.133817 containerd[1987]: time="2026-03-13T00:39:40.133779350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:39:41.490738 systemd-networkd[1795]: cali3e4ffebeb65: Gained IPv6LL Mar 13 00:39:41.495256 containerd[1987]: time="2026-03-13T00:39:41.495199274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:41.497139 containerd[1987]: time="2026-03-13T00:39:41.497053248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:39:41.499417 containerd[1987]: time="2026-03-13T00:39:41.499347868Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:41.502904 containerd[1987]: time="2026-03-13T00:39:41.502844779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:41.503839 containerd[1987]: time="2026-03-13T00:39:41.503807808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size 
\"7595926\" in 1.369777493s" Mar 13 00:39:41.503947 containerd[1987]: time="2026-03-13T00:39:41.503841649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:39:41.510836 containerd[1987]: time="2026-03-13T00:39:41.510785797Z" level=info msg="CreateContainer within sandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:39:41.542410 containerd[1987]: time="2026-03-13T00:39:41.541746315Z" level=info msg="Container e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:41.560728 containerd[1987]: time="2026-03-13T00:39:41.560683095Z" level=info msg="CreateContainer within sandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\"" Mar 13 00:39:41.561733 containerd[1987]: time="2026-03-13T00:39:41.561680324Z" level=info msg="StartContainer for \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\"" Mar 13 00:39:41.563637 containerd[1987]: time="2026-03-13T00:39:41.563595211Z" level=info msg="connecting to shim e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd" address="unix:///run/containerd/s/7e92f7bf4f4fb94e38b10c80b17d55f3dbcc6f38a8b0db21a94d5e66805293b1" protocol=ttrpc version=3 Mar 13 00:39:41.596602 systemd[1]: Started cri-containerd-e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd.scope - libcontainer container e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd. 
Mar 13 00:39:41.655194 containerd[1987]: time="2026-03-13T00:39:41.655117350Z" level=info msg="StartContainer for \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" returns successfully" Mar 13 00:39:41.658914 containerd[1987]: time="2026-03-13T00:39:41.658594831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:39:43.398888 (udev-worker)[4888]: Network interface NamePolicy= disabled on kernel command line. Mar 13 00:39:43.404330 systemd-networkd[1795]: vxlan.calico: Link UP Mar 13 00:39:43.404340 systemd-networkd[1795]: vxlan.calico: Gained carrier Mar 13 00:39:43.424185 (udev-worker)[4889]: Network interface NamePolicy= disabled on kernel command line. Mar 13 00:39:43.427955 (udev-worker)[4904]: Network interface NamePolicy= disabled on kernel command line. Mar 13 00:39:44.305580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144920510.mount: Deactivated successfully. Mar 13 00:39:44.356421 containerd[1987]: time="2026-03-13T00:39:44.356339891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:44.370401 containerd[1987]: time="2026-03-13T00:39:44.369568988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:39:44.409434 containerd[1987]: time="2026-03-13T00:39:44.409235303Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:44.426541 containerd[1987]: time="2026-03-13T00:39:44.426455147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:44.436089 containerd[1987]: 
time="2026-03-13T00:39:44.435876229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.770812119s" Mar 13 00:39:44.436089 containerd[1987]: time="2026-03-13T00:39:44.435933796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:39:44.436122 systemd-networkd[1795]: vxlan.calico: Gained IPv6LL Mar 13 00:39:44.470932 containerd[1987]: time="2026-03-13T00:39:44.470880320Z" level=info msg="CreateContainer within sandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:39:44.540634 containerd[1987]: time="2026-03-13T00:39:44.540517994Z" level=info msg="Container 3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:44.664536 containerd[1987]: time="2026-03-13T00:39:44.664492757Z" level=info msg="CreateContainer within sandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\"" Mar 13 00:39:44.666097 containerd[1987]: time="2026-03-13T00:39:44.666011296Z" level=info msg="StartContainer for \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\"" Mar 13 00:39:44.694383 containerd[1987]: time="2026-03-13T00:39:44.694040853Z" level=info msg="connecting to shim 3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209" 
address="unix:///run/containerd/s/7e92f7bf4f4fb94e38b10c80b17d55f3dbcc6f38a8b0db21a94d5e66805293b1" protocol=ttrpc version=3 Mar 13 00:39:44.840763 systemd[1]: Started cri-containerd-3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209.scope - libcontainer container 3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209. Mar 13 00:39:44.973345 containerd[1987]: time="2026-03-13T00:39:44.973222741Z" level=info msg="StartContainer for \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" returns successfully" Mar 13 00:39:45.219499 containerd[1987]: time="2026-03-13T00:39:45.218697210Z" level=info msg="StopContainer for \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" with timeout 30 (s)" Mar 13 00:39:45.219499 containerd[1987]: time="2026-03-13T00:39:45.219057318Z" level=info msg="StopContainer for \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" with timeout 30 (s)" Mar 13 00:39:45.221015 containerd[1987]: time="2026-03-13T00:39:45.220133983Z" level=info msg="Stop container \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" with signal terminated" Mar 13 00:39:45.221255 containerd[1987]: time="2026-03-13T00:39:45.221232682Z" level=info msg="Stop container \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" with signal terminated" Mar 13 00:39:45.244538 systemd[1]: cri-containerd-3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209.scope: Deactivated successfully. Mar 13 00:39:45.269480 systemd[1]: cri-containerd-e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd.scope: Deactivated successfully. 
Mar 13 00:39:45.290810 containerd[1987]: time="2026-03-13T00:39:45.290622154Z" level=info msg="received container exit event container_id:\"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" id:\"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" pid:4727 exited_at:{seconds:1773362385 nanos:281973054}" Mar 13 00:39:45.303814 containerd[1987]: time="2026-03-13T00:39:45.303440135Z" level=info msg="received container exit event container_id:\"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" id:\"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" pid:4980 exit_status:2 exited_at:{seconds:1773362385 nanos:281444687}" Mar 13 00:39:45.340571 kubelet[3328]: I0313 00:39:45.333441 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bf747955c-zvrbt" podStartSLOduration=22.012767576 podStartE2EDuration="26.326975369s" podCreationTimestamp="2026-03-13 00:39:19 +0000 UTC" firstStartedPulling="2026-03-13 00:39:40.130215535 +0000 UTC m=+45.469297005" lastFinishedPulling="2026-03-13 00:39:44.444423312 +0000 UTC m=+49.783504798" observedRunningTime="2026-03-13 00:39:45.325905849 +0000 UTC m=+50.664987345" watchObservedRunningTime="2026-03-13 00:39:45.326975369 +0000 UTC m=+50.666056867" Mar 13 00:39:45.365823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd-rootfs.mount: Deactivated successfully. Mar 13 00:39:45.374048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209-rootfs.mount: Deactivated successfully. 
Mar 13 00:39:45.477213 containerd[1987]: time="2026-03-13T00:39:45.477056118Z" level=info msg="StopContainer for \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" returns successfully" Mar 13 00:39:45.482217 containerd[1987]: time="2026-03-13T00:39:45.482168922Z" level=info msg="StopContainer for \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" returns successfully" Mar 13 00:39:45.482824 containerd[1987]: time="2026-03-13T00:39:45.482795584Z" level=info msg="StopPodSandbox for \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\"" Mar 13 00:39:45.487367 containerd[1987]: time="2026-03-13T00:39:45.487315443Z" level=info msg="Container to stop \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:39:45.487538 containerd[1987]: time="2026-03-13T00:39:45.487438040Z" level=info msg="Container to stop \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:39:45.496065 systemd[1]: cri-containerd-b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297.scope: Deactivated successfully. Mar 13 00:39:45.501463 containerd[1987]: time="2026-03-13T00:39:45.501283541Z" level=info msg="received sandbox exit event container_id:\"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" id:\"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" exit_status:137 exited_at:{seconds:1773362385 nanos:499039456}" monitor_name=podsandbox Mar 13 00:39:45.526732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297-rootfs.mount: Deactivated successfully. 
Mar 13 00:39:45.537691 containerd[1987]: time="2026-03-13T00:39:45.537640234Z" level=info msg="shim disconnected" id=b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297 namespace=k8s.io Mar 13 00:39:45.539929 containerd[1987]: time="2026-03-13T00:39:45.537932259Z" level=warning msg="cleaning up after shim disconnected" id=b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297 namespace=k8s.io Mar 13 00:39:45.551937 containerd[1987]: time="2026-03-13T00:39:45.537951396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:39:45.629596 containerd[1987]: time="2026-03-13T00:39:45.629425245Z" level=info msg="received sandbox container exit event sandbox_id:\"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" exit_status:137 exited_at:{seconds:1773362385 nanos:499039456}" monitor_name=criService Mar 13 00:39:45.631825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297-shm.mount: Deactivated successfully. Mar 13 00:39:45.830260 systemd-networkd[1795]: cali3e4ffebeb65: Link DOWN Mar 13 00:39:45.830277 systemd-networkd[1795]: cali3e4ffebeb65: Lost carrier Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.827 [INFO][5081] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.828 [INFO][5081] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" iface="eth0" netns="/var/run/netns/cni-793b9926-4320-59b2-d8ee-4896c2993bbe" Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.829 [INFO][5081] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" iface="eth0" netns="/var/run/netns/cni-793b9926-4320-59b2-d8ee-4896c2993bbe" Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.836 [INFO][5081] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" after=7.080906ms iface="eth0" netns="/var/run/netns/cni-793b9926-4320-59b2-d8ee-4896c2993bbe" Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.836 [INFO][5081] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.836 [INFO][5081] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.916 [INFO][5089] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.916 [INFO][5089] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:45.974095 containerd[1987]: 2026-03-13 00:39:45.916 [INFO][5089] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:39:45.976972 containerd[1987]: 2026-03-13 00:39:45.968 [INFO][5089] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:45.976972 containerd[1987]: 2026-03-13 00:39:45.968 [INFO][5089] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:45.976972 containerd[1987]: 2026-03-13 00:39:45.970 [INFO][5089] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:45.976972 containerd[1987]: 2026-03-13 00:39:45.972 [INFO][5081] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:45.976972 containerd[1987]: time="2026-03-13T00:39:45.976452630Z" level=info msg="TearDown network for sandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" successfully" Mar 13 00:39:45.976972 containerd[1987]: time="2026-03-13T00:39:45.976486857Z" level=info msg="StopPodSandbox for \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" returns successfully" Mar 13 00:39:45.977810 systemd[1]: run-netns-cni\x2d793b9926\x2d4320\x2d59b2\x2dd8ee\x2d4896c2993bbe.mount: Deactivated successfully. 
Mar 13 00:39:46.145261 kubelet[3328]: I0313 00:39:46.145211 3328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-nginx-config\") pod \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\" (UID: \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\") " Mar 13 00:39:46.146153 kubelet[3328]: I0313 00:39:46.145315 3328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-backend-key-pair\") pod \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\" (UID: \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\") " Mar 13 00:39:46.146153 kubelet[3328]: I0313 00:39:46.145375 3328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2974\" (UniqueName: \"kubernetes.io/projected/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-kube-api-access-t2974\") pod \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\" (UID: \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\") " Mar 13 00:39:46.146153 kubelet[3328]: I0313 00:39:46.145430 3328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-ca-bundle\") pod \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\" (UID: \"b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37\") " Mar 13 00:39:46.179484 kubelet[3328]: I0313 00:39:46.178676 3328 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37" (UID: "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:39:46.179484 kubelet[3328]: I0313 00:39:46.174630 3328 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37" (UID: "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:39:46.179484 kubelet[3328]: I0313 00:39:46.174630 3328 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37" (UID: "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:39:46.181922 kubelet[3328]: I0313 00:39:46.181864 3328 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-kube-api-access-t2974" (OuterVolumeSpecName: "kube-api-access-t2974") pod "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37" (UID: "b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37"). InnerVolumeSpecName "kube-api-access-t2974". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:39:46.197997 systemd[1]: Removed slice kubepods-besteffort-podb9bbf7a5_b532_4c06_848a_9c2ee2d5fe37.slice - libcontainer container kubepods-besteffort-podb9bbf7a5_b532_4c06_848a_9c2ee2d5fe37.slice. Mar 13 00:39:46.198156 systemd[1]: kubepods-besteffort-podb9bbf7a5_b532_4c06_848a_9c2ee2d5fe37.slice: Consumed 154ms CPU time, 11.8M memory peak, 1.1M read from disk, 12K written to disk. 
Mar 13 00:39:46.204506 kubelet[3328]: I0313 00:39:46.204456 3328 scope.go:117] "RemoveContainer" containerID="3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209" Mar 13 00:39:46.217388 containerd[1987]: time="2026-03-13T00:39:46.217209350Z" level=info msg="RemoveContainer for \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\"" Mar 13 00:39:46.235194 containerd[1987]: time="2026-03-13T00:39:46.235116705Z" level=info msg="RemoveContainer for \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" returns successfully" Mar 13 00:39:46.236435 kubelet[3328]: I0313 00:39:46.236330 3328 scope.go:117] "RemoveContainer" containerID="e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd" Mar 13 00:39:46.248071 kubelet[3328]: I0313 00:39:46.248017 3328 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2974\" (UniqueName: \"kubernetes.io/projected/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-kube-api-access-t2974\") on node \"ip-172-31-22-244\" DevicePath \"\"" Mar 13 00:39:46.250045 kubelet[3328]: I0313 00:39:46.249646 3328 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-ca-bundle\") on node \"ip-172-31-22-244\" DevicePath \"\"" Mar 13 00:39:46.250045 kubelet[3328]: I0313 00:39:46.249677 3328 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-nginx-config\") on node \"ip-172-31-22-244\" DevicePath \"\"" Mar 13 00:39:46.250045 kubelet[3328]: I0313 00:39:46.249695 3328 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37-whisker-backend-key-pair\") on node \"ip-172-31-22-244\" DevicePath \"\"" Mar 13 00:39:46.252025 containerd[1987]: time="2026-03-13T00:39:46.251977022Z" level=info msg="RemoveContainer 
for \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\"" Mar 13 00:39:46.265342 containerd[1987]: time="2026-03-13T00:39:46.265304797Z" level=info msg="RemoveContainer for \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" returns successfully" Mar 13 00:39:46.265835 kubelet[3328]: I0313 00:39:46.265797 3328 scope.go:117] "RemoveContainer" containerID="3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209" Mar 13 00:39:46.303620 containerd[1987]: time="2026-03-13T00:39:46.303470400Z" level=error msg="ContainerStatus for \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\": not found" Mar 13 00:39:46.303973 kubelet[3328]: E0313 00:39:46.303933 3328 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\": not found" containerID="3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209" Mar 13 00:39:46.327896 kubelet[3328]: I0313 00:39:46.318546 3328 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209"} err="failed to get container status \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\": not found" Mar 13 00:39:46.328058 kubelet[3328]: I0313 00:39:46.327906 3328 scope.go:117] "RemoveContainer" containerID="e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd" Mar 13 00:39:46.331501 containerd[1987]: time="2026-03-13T00:39:46.331430677Z" level=error msg="ContainerStatus for 
\"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\": not found" Mar 13 00:39:46.331965 kubelet[3328]: E0313 00:39:46.331715 3328 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\": not found" containerID="e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd" Mar 13 00:39:46.331965 kubelet[3328]: I0313 00:39:46.331760 3328 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd"} err="failed to get container status \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\": not found" Mar 13 00:39:46.331965 kubelet[3328]: I0313 00:39:46.331805 3328 scope.go:117] "RemoveContainer" containerID="3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209" Mar 13 00:39:46.332760 containerd[1987]: time="2026-03-13T00:39:46.332723438Z" level=error msg="ContainerStatus for \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\": not found" Mar 13 00:39:46.332964 kubelet[3328]: I0313 00:39:46.332942 3328 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209"} err="failed to get container status \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"3c650d119de0b8622080439c45dcd371ed98da552779c88b660df7d67e957209\": not found" Mar 13 00:39:46.333026 kubelet[3328]: I0313 00:39:46.332970 3328 scope.go:117] "RemoveContainer" containerID="e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd" Mar 13 00:39:46.333255 containerd[1987]: time="2026-03-13T00:39:46.333158797Z" level=error msg="ContainerStatus for \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\": not found" Mar 13 00:39:46.333486 kubelet[3328]: I0313 00:39:46.333280 3328 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd"} err="failed to get container status \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"e772a6906ac982d78c162087f538a8aaae337b806b75f4cc21ecd42f884108dd\": not found" Mar 13 00:39:46.364218 systemd[1]: var-lib-kubelet-pods-b9bbf7a5\x2db532\x2d4c06\x2d848a\x2d9c2ee2d5fe37-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 13 00:39:46.364389 systemd[1]: var-lib-kubelet-pods-b9bbf7a5\x2db532\x2d4c06\x2d848a\x2d9c2ee2d5fe37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt2974.mount: Deactivated successfully. Mar 13 00:39:46.445365 systemd[1]: Created slice kubepods-besteffort-pod889b28d9_6231_4834_991d_edcf9f432105.slice - libcontainer container kubepods-besteffort-pod889b28d9_6231_4834_991d_edcf9f432105.slice. 
Mar 13 00:39:46.565529 kubelet[3328]: I0313 00:39:46.565470 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/889b28d9-6231-4834-991d-edcf9f432105-nginx-config\") pod \"whisker-58475b8bbb-qmm8s\" (UID: \"889b28d9-6231-4834-991d-edcf9f432105\") " pod="calico-system/whisker-58475b8bbb-qmm8s" Mar 13 00:39:46.565529 kubelet[3328]: I0313 00:39:46.565527 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/889b28d9-6231-4834-991d-edcf9f432105-whisker-backend-key-pair\") pod \"whisker-58475b8bbb-qmm8s\" (UID: \"889b28d9-6231-4834-991d-edcf9f432105\") " pod="calico-system/whisker-58475b8bbb-qmm8s" Mar 13 00:39:46.565993 kubelet[3328]: I0313 00:39:46.565563 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqwn\" (UniqueName: \"kubernetes.io/projected/889b28d9-6231-4834-991d-edcf9f432105-kube-api-access-crqwn\") pod \"whisker-58475b8bbb-qmm8s\" (UID: \"889b28d9-6231-4834-991d-edcf9f432105\") " pod="calico-system/whisker-58475b8bbb-qmm8s" Mar 13 00:39:46.565993 kubelet[3328]: I0313 00:39:46.565590 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/889b28d9-6231-4834-991d-edcf9f432105-whisker-ca-bundle\") pod \"whisker-58475b8bbb-qmm8s\" (UID: \"889b28d9-6231-4834-991d-edcf9f432105\") " pod="calico-system/whisker-58475b8bbb-qmm8s" Mar 13 00:39:46.766380 containerd[1987]: time="2026-03-13T00:39:46.766070661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58475b8bbb-qmm8s,Uid:889b28d9-6231-4834-991d-edcf9f432105,Namespace:calico-system,Attempt:0,}" Mar 13 00:39:46.849401 kubelet[3328]: I0313 00:39:46.849332 3328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37" path="/var/lib/kubelet/pods/b9bbf7a5-b532-4c06-848a-9c2ee2d5fe37/volumes" Mar 13 00:39:46.919610 systemd-networkd[1795]: calic3a13027c8f: Link UP Mar 13 00:39:46.921091 systemd-networkd[1795]: calic3a13027c8f: Gained carrier Mar 13 00:39:46.941457 containerd[1987]: 2026-03-13 00:39:46.817 [INFO][5121] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0 whisker-58475b8bbb- calico-system 889b28d9-6231-4834-991d-edcf9f432105 947 0 2026-03-13 00:39:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58475b8bbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-244 whisker-58475b8bbb-qmm8s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic3a13027c8f [] [] }} ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Namespace="calico-system" Pod="whisker-58475b8bbb-qmm8s" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-" Mar 13 00:39:46.941457 containerd[1987]: 2026-03-13 00:39:46.817 [INFO][5121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Namespace="calico-system" Pod="whisker-58475b8bbb-qmm8s" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" Mar 13 00:39:46.941457 containerd[1987]: 2026-03-13 00:39:46.852 [INFO][5133] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" HandleID="k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Workload="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.863 [INFO][5133] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" HandleID="k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Workload="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f74e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-244", "pod":"whisker-58475b8bbb-qmm8s", "timestamp":"2026-03-13 00:39:46.8524845 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027d080)} Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.863 [INFO][5133] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.863 [INFO][5133] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.863 [INFO][5133] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.866 [INFO][5133] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" host="ip-172-31-22-244" Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.878 [INFO][5133] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.883 [INFO][5133] ipam/ipam.go 526: Trying affinity for 192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.885 [INFO][5133] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:46.941758 containerd[1987]: 2026-03-13 00:39:46.887 [INFO][5133] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:46.942110 containerd[1987]: 2026-03-13 00:39:46.888 [INFO][5133] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" host="ip-172-31-22-244" Mar 13 00:39:46.942110 containerd[1987]: 2026-03-13 00:39:46.889 [INFO][5133] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307 Mar 13 00:39:46.942110 containerd[1987]: 2026-03-13 00:39:46.894 [INFO][5133] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" host="ip-172-31-22-244" Mar 13 00:39:46.942110 containerd[1987]: 2026-03-13 00:39:46.904 [INFO][5133] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.66/26] block=192.168.92.64/26 
handle="k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" host="ip-172-31-22-244" Mar 13 00:39:46.942110 containerd[1987]: 2026-03-13 00:39:46.904 [INFO][5133] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.66/26] handle="k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" host="ip-172-31-22-244" Mar 13 00:39:46.942110 containerd[1987]: 2026-03-13 00:39:46.904 [INFO][5133] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:46.942110 containerd[1987]: 2026-03-13 00:39:46.904 [INFO][5133] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.66/26] IPv6=[] ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" HandleID="k8s-pod-network.f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Workload="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" Mar 13 00:39:46.943662 containerd[1987]: 2026-03-13 00:39:46.912 [INFO][5121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Namespace="calico-system" Pod="whisker-58475b8bbb-qmm8s" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0", GenerateName:"whisker-58475b8bbb-", Namespace:"calico-system", SelfLink:"", UID:"889b28d9-6231-4834-991d-edcf9f432105", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58475b8bbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"whisker-58475b8bbb-qmm8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic3a13027c8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:46.943662 containerd[1987]: 2026-03-13 00:39:46.913 [INFO][5121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.66/32] ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Namespace="calico-system" Pod="whisker-58475b8bbb-qmm8s" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" Mar 13 00:39:46.943819 containerd[1987]: 2026-03-13 00:39:46.913 [INFO][5121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3a13027c8f ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Namespace="calico-system" Pod="whisker-58475b8bbb-qmm8s" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" Mar 13 00:39:46.943819 containerd[1987]: 2026-03-13 00:39:46.922 [INFO][5121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Namespace="calico-system" Pod="whisker-58475b8bbb-qmm8s" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" Mar 13 00:39:46.943896 containerd[1987]: 2026-03-13 00:39:46.922 [INFO][5121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" 
Namespace="calico-system" Pod="whisker-58475b8bbb-qmm8s" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0", GenerateName:"whisker-58475b8bbb-", Namespace:"calico-system", SelfLink:"", UID:"889b28d9-6231-4834-991d-edcf9f432105", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58475b8bbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307", Pod:"whisker-58475b8bbb-qmm8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic3a13027c8f", MAC:"3a:27:ed:e8:a0:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:46.944025 containerd[1987]: 2026-03-13 00:39:46.938 [INFO][5121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" Namespace="calico-system" Pod="whisker-58475b8bbb-qmm8s" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--58475b8bbb--qmm8s-eth0" Mar 13 00:39:46.980154 
containerd[1987]: time="2026-03-13T00:39:46.980078544Z" level=info msg="connecting to shim f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307" address="unix:///run/containerd/s/2faf05dd64a39a633f50a47c9bd93a775b84acaaf99fb3071ca98b9cd1731522" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:47.022848 systemd[1]: Started cri-containerd-f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307.scope - libcontainer container f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307. Mar 13 00:39:47.121224 containerd[1987]: time="2026-03-13T00:39:47.121177255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58475b8bbb-qmm8s,Uid:889b28d9-6231-4834-991d-edcf9f432105,Namespace:calico-system,Attempt:0,} returns sandbox id \"f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307\"" Mar 13 00:39:47.129609 containerd[1987]: time="2026-03-13T00:39:47.129565040Z" level=info msg="CreateContainer within sandbox \"f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:39:47.139960 containerd[1987]: time="2026-03-13T00:39:47.139912390Z" level=info msg="Container 04396cd3c448210659919c67d9dfefeaedc757fdb7c3c066f166a65568ae6480: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:47.150885 containerd[1987]: time="2026-03-13T00:39:47.150841429Z" level=info msg="CreateContainer within sandbox \"f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"04396cd3c448210659919c67d9dfefeaedc757fdb7c3c066f166a65568ae6480\"" Mar 13 00:39:47.162550 containerd[1987]: time="2026-03-13T00:39:47.162412563Z" level=info msg="StartContainer for \"04396cd3c448210659919c67d9dfefeaedc757fdb7c3c066f166a65568ae6480\"" Mar 13 00:39:47.164085 containerd[1987]: time="2026-03-13T00:39:47.164013765Z" level=info msg="connecting to shim 
04396cd3c448210659919c67d9dfefeaedc757fdb7c3c066f166a65568ae6480" address="unix:///run/containerd/s/2faf05dd64a39a633f50a47c9bd93a775b84acaaf99fb3071ca98b9cd1731522" protocol=ttrpc version=3 Mar 13 00:39:47.187731 systemd[1]: Started cri-containerd-04396cd3c448210659919c67d9dfefeaedc757fdb7c3c066f166a65568ae6480.scope - libcontainer container 04396cd3c448210659919c67d9dfefeaedc757fdb7c3c066f166a65568ae6480. Mar 13 00:39:47.251608 containerd[1987]: time="2026-03-13T00:39:47.251333343Z" level=info msg="StartContainer for \"04396cd3c448210659919c67d9dfefeaedc757fdb7c3c066f166a65568ae6480\" returns successfully" Mar 13 00:39:47.270775 containerd[1987]: time="2026-03-13T00:39:47.270232222Z" level=info msg="CreateContainer within sandbox \"f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:39:47.285396 containerd[1987]: time="2026-03-13T00:39:47.285266529Z" level=info msg="Container 1e1e0f834d0cde4e6cc0bba13dcc607235ed07a9b114240fc3810e9eb6686001: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:47.300574 containerd[1987]: time="2026-03-13T00:39:47.300529743Z" level=info msg="CreateContainer within sandbox \"f84ac36d16bc6ba070c0d216941c3e063a5e3e89725ebd4d0a49dbbca2de1307\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1e1e0f834d0cde4e6cc0bba13dcc607235ed07a9b114240fc3810e9eb6686001\"" Mar 13 00:39:47.301219 containerd[1987]: time="2026-03-13T00:39:47.301150128Z" level=info msg="StartContainer for \"1e1e0f834d0cde4e6cc0bba13dcc607235ed07a9b114240fc3810e9eb6686001\"" Mar 13 00:39:47.303464 containerd[1987]: time="2026-03-13T00:39:47.303420305Z" level=info msg="connecting to shim 1e1e0f834d0cde4e6cc0bba13dcc607235ed07a9b114240fc3810e9eb6686001" address="unix:///run/containerd/s/2faf05dd64a39a633f50a47c9bd93a775b84acaaf99fb3071ca98b9cd1731522" protocol=ttrpc version=3 Mar 13 00:39:47.322620 systemd[1]: Started 
cri-containerd-1e1e0f834d0cde4e6cc0bba13dcc607235ed07a9b114240fc3810e9eb6686001.scope - libcontainer container 1e1e0f834d0cde4e6cc0bba13dcc607235ed07a9b114240fc3810e9eb6686001. Mar 13 00:39:47.391600 containerd[1987]: time="2026-03-13T00:39:47.391559821Z" level=info msg="StartContainer for \"1e1e0f834d0cde4e6cc0bba13dcc607235ed07a9b114240fc3810e9eb6686001\" returns successfully" Mar 13 00:39:48.658620 systemd-networkd[1795]: calic3a13027c8f: Gained IPv6LL Mar 13 00:39:49.847145 containerd[1987]: time="2026-03-13T00:39:49.847039272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkmzp,Uid:c8d85f17-e076-4723-880f-19418eb52308,Namespace:calico-system,Attempt:0,}" Mar 13 00:39:49.847885 containerd[1987]: time="2026-03-13T00:39:49.847039266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd7nx,Uid:22a17a20-5da9-4c40-a71d-b92e59cd60c8,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:49.853863 systemd[1]: Started sshd@7-172.31.22.244:22-20.161.92.111:50044.service - OpenSSH per-connection server daemon (20.161.92.111:50044). Mar 13 00:39:50.129828 systemd-networkd[1795]: cali699cf1d2479: Link UP Mar 13 00:39:50.130870 systemd-networkd[1795]: cali699cf1d2479: Gained carrier Mar 13 00:39:50.139170 (udev-worker)[5341]: Network interface NamePolicy= disabled on kernel command line. 
Mar 13 00:39:50.154451 containerd[1987]: 2026-03-13 00:39:49.973 [INFO][5315] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0 coredns-674b8bbfcf- kube-system 22a17a20-5da9-4c40-a71d-b92e59cd60c8 846 0 2026-03-13 00:39:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-244 coredns-674b8bbfcf-nd7nx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali699cf1d2479 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7nx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-" Mar 13 00:39:50.154451 containerd[1987]: 2026-03-13 00:39:49.975 [INFO][5315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7nx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" Mar 13 00:39:50.154451 containerd[1987]: 2026-03-13 00:39:50.067 [INFO][5327] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" HandleID="k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Workload="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" Mar 13 00:39:50.154727 kubelet[3328]: I0313 00:39:50.154196 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58475b8bbb-qmm8s" podStartSLOduration=4.154169521 podStartE2EDuration="4.154169521s" podCreationTimestamp="2026-03-13 00:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:39:48.224214801 +0000 UTC m=+53.563296295" watchObservedRunningTime="2026-03-13 00:39:50.154169521 +0000 UTC m=+55.493251016" Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.086 [INFO][5327] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" HandleID="k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Workload="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004081a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-244", "pod":"coredns-674b8bbfcf-nd7nx", "timestamp":"2026-03-13 00:39:50.067744671 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000151080)} Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.086 [INFO][5327] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.086 [INFO][5327] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.087 [INFO][5327] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.090 [INFO][5327] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" host="ip-172-31-22-244" Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.096 [INFO][5327] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.101 [INFO][5327] ipam/ipam.go 526: Trying affinity for 192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.103 [INFO][5327] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:50.156753 containerd[1987]: 2026-03-13 00:39:50.105 [INFO][5327] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:50.157081 containerd[1987]: 2026-03-13 00:39:50.105 [INFO][5327] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" host="ip-172-31-22-244" Mar 13 00:39:50.157081 containerd[1987]: 2026-03-13 00:39:50.107 [INFO][5327] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575 Mar 13 00:39:50.157081 containerd[1987]: 2026-03-13 00:39:50.112 [INFO][5327] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" host="ip-172-31-22-244" Mar 13 00:39:50.157081 containerd[1987]: 2026-03-13 00:39:50.120 [INFO][5327] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.67/26] block=192.168.92.64/26 
handle="k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" host="ip-172-31-22-244" Mar 13 00:39:50.157081 containerd[1987]: 2026-03-13 00:39:50.120 [INFO][5327] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.67/26] handle="k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" host="ip-172-31-22-244" Mar 13 00:39:50.157081 containerd[1987]: 2026-03-13 00:39:50.120 [INFO][5327] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:50.157081 containerd[1987]: 2026-03-13 00:39:50.120 [INFO][5327] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.67/26] IPv6=[] ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" HandleID="k8s-pod-network.6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Workload="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" Mar 13 00:39:50.157291 containerd[1987]: 2026-03-13 00:39:50.124 [INFO][5315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7nx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22a17a20-5da9-4c40-a71d-b92e59cd60c8", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"coredns-674b8bbfcf-nd7nx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali699cf1d2479", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:50.157291 containerd[1987]: 2026-03-13 00:39:50.124 [INFO][5315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.67/32] ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7nx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" Mar 13 00:39:50.157291 containerd[1987]: 2026-03-13 00:39:50.124 [INFO][5315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali699cf1d2479 ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7nx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" Mar 13 00:39:50.157291 containerd[1987]: 2026-03-13 00:39:50.130 [INFO][5315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7nx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" Mar 13 00:39:50.157291 containerd[1987]: 2026-03-13 00:39:50.131 [INFO][5315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7nx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22a17a20-5da9-4c40-a71d-b92e59cd60c8", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575", Pod:"coredns-674b8bbfcf-nd7nx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali699cf1d2479", MAC:"a6:c2:6b:f1:ca:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:50.157291 containerd[1987]: 2026-03-13 00:39:50.148 [INFO][5315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7nx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--nd7nx-eth0" Mar 13 00:39:50.209460 containerd[1987]: time="2026-03-13T00:39:50.209305955Z" level=info msg="connecting to shim 6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575" address="unix:///run/containerd/s/6af20af24576ff86529c670ee0940e3acc5af891fd52ae3f1df187c7f4333106" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:50.270763 systemd[1]: Started cri-containerd-6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575.scope - libcontainer container 6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575. 
Mar 13 00:39:50.280408 systemd-networkd[1795]: cali258fbba13a7: Link UP Mar 13 00:39:50.286642 systemd-networkd[1795]: cali258fbba13a7: Gained carrier Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.002 [INFO][5301] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0 csi-node-driver- calico-system c8d85f17-e076-4723-880f-19418eb52308 708 0 2026-03-13 00:39:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-244 csi-node-driver-bkmzp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali258fbba13a7 [] [] }} ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Namespace="calico-system" Pod="csi-node-driver-bkmzp" WorkloadEndpoint="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.002 [INFO][5301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Namespace="calico-system" Pod="csi-node-driver-bkmzp" WorkloadEndpoint="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.080 [INFO][5332] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" HandleID="k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Workload="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.090 [INFO][5332] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" HandleID="k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Workload="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-244", "pod":"csi-node-driver-bkmzp", "timestamp":"2026-03-13 00:39:50.08092559 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000189080)} Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.090 [INFO][5332] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.121 [INFO][5332] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.121 [INFO][5332] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.194 [INFO][5332] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.206 [INFO][5332] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.217 [INFO][5332] ipam/ipam.go 526: Trying affinity for 192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.221 [INFO][5332] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.233 [INFO][5332] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.233 [INFO][5332] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.239 [INFO][5332] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.251 [INFO][5332] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.268 [INFO][5332] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.68/26] block=192.168.92.64/26 
handle="k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.269 [INFO][5332] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.68/26] handle="k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" host="ip-172-31-22-244" Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.269 [INFO][5332] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:50.331818 containerd[1987]: 2026-03-13 00:39:50.269 [INFO][5332] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.68/26] IPv6=[] ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" HandleID="k8s-pod-network.027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Workload="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" Mar 13 00:39:50.332757 containerd[1987]: 2026-03-13 00:39:50.275 [INFO][5301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Namespace="calico-system" Pod="csi-node-driver-bkmzp" WorkloadEndpoint="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c8d85f17-e076-4723-880f-19418eb52308", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"csi-node-driver-bkmzp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali258fbba13a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:50.332757 containerd[1987]: 2026-03-13 00:39:50.275 [INFO][5301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.68/32] ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Namespace="calico-system" Pod="csi-node-driver-bkmzp" WorkloadEndpoint="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" Mar 13 00:39:50.332757 containerd[1987]: 2026-03-13 00:39:50.276 [INFO][5301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali258fbba13a7 ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Namespace="calico-system" Pod="csi-node-driver-bkmzp" WorkloadEndpoint="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" Mar 13 00:39:50.332757 containerd[1987]: 2026-03-13 00:39:50.291 [INFO][5301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Namespace="calico-system" Pod="csi-node-driver-bkmzp" WorkloadEndpoint="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" Mar 13 00:39:50.332757 containerd[1987]: 2026-03-13 00:39:50.292 [INFO][5301] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Namespace="calico-system" Pod="csi-node-driver-bkmzp" WorkloadEndpoint="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c8d85f17-e076-4723-880f-19418eb52308", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d", Pod:"csi-node-driver-bkmzp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali258fbba13a7", MAC:"1a:78:b6:92:4a:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:50.332757 containerd[1987]: 2026-03-13 00:39:50.327 [INFO][5301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" Namespace="calico-system" Pod="csi-node-driver-bkmzp" WorkloadEndpoint="ip--172--31--22--244-k8s-csi--node--driver--bkmzp-eth0" Mar 13 00:39:50.389490 containerd[1987]: time="2026-03-13T00:39:50.389279569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd7nx,Uid:22a17a20-5da9-4c40-a71d-b92e59cd60c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575\"" Mar 13 00:39:50.405065 sshd[5300]: Accepted publickey for core from 20.161.92.111 port 50044 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:39:50.407466 containerd[1987]: time="2026-03-13T00:39:50.406574326Z" level=info msg="CreateContainer within sandbox \"6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:39:50.408278 containerd[1987]: time="2026-03-13T00:39:50.406749297Z" level=info msg="connecting to shim 027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d" address="unix:///run/containerd/s/3d31d4a1f9dee67ca9ae4fc4b6dc92772cf3f5a48373daa3d083181edaec11eb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:50.409205 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:50.424028 systemd-logind[1959]: New session 8 of user core. Mar 13 00:39:50.429295 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:39:50.469207 containerd[1987]: time="2026-03-13T00:39:50.469150407Z" level=info msg="Container a6b6dc5706688ede967667c1964e47b3e2a71afb9ae8c7056647b38f8f3ead1d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:50.470711 systemd[1]: Started cri-containerd-027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d.scope - libcontainer container 027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d. 
Mar 13 00:39:50.485385 containerd[1987]: time="2026-03-13T00:39:50.485064541Z" level=info msg="CreateContainer within sandbox \"6862812cbce3ecb7d8759c1cd5d5337326dd3da4076331859def07b453f14575\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6b6dc5706688ede967667c1964e47b3e2a71afb9ae8c7056647b38f8f3ead1d\"" Mar 13 00:39:50.487184 containerd[1987]: time="2026-03-13T00:39:50.487128175Z" level=info msg="StartContainer for \"a6b6dc5706688ede967667c1964e47b3e2a71afb9ae8c7056647b38f8f3ead1d\"" Mar 13 00:39:50.492599 containerd[1987]: time="2026-03-13T00:39:50.492551807Z" level=info msg="connecting to shim a6b6dc5706688ede967667c1964e47b3e2a71afb9ae8c7056647b38f8f3ead1d" address="unix:///run/containerd/s/6af20af24576ff86529c670ee0940e3acc5af891fd52ae3f1df187c7f4333106" protocol=ttrpc version=3 Mar 13 00:39:50.541667 systemd[1]: Started cri-containerd-a6b6dc5706688ede967667c1964e47b3e2a71afb9ae8c7056647b38f8f3ead1d.scope - libcontainer container a6b6dc5706688ede967667c1964e47b3e2a71afb9ae8c7056647b38f8f3ead1d. 
Mar 13 00:39:50.552381 containerd[1987]: time="2026-03-13T00:39:50.552317957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkmzp,Uid:c8d85f17-e076-4723-880f-19418eb52308,Namespace:calico-system,Attempt:0,} returns sandbox id \"027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d\"" Mar 13 00:39:50.577405 containerd[1987]: time="2026-03-13T00:39:50.577349616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:39:50.634565 containerd[1987]: time="2026-03-13T00:39:50.634528302Z" level=info msg="StartContainer for \"a6b6dc5706688ede967667c1964e47b3e2a71afb9ae8c7056647b38f8f3ead1d\" returns successfully" Mar 13 00:39:51.154978 systemd-networkd[1795]: cali699cf1d2479: Gained IPv6LL Mar 13 00:39:51.292157 kubelet[3328]: I0313 00:39:51.291989 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nd7nx" podStartSLOduration=51.291963622 podStartE2EDuration="51.291963622s" podCreationTimestamp="2026-03-13 00:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:39:51.288104057 +0000 UTC m=+56.627185552" watchObservedRunningTime="2026-03-13 00:39:51.291963622 +0000 UTC m=+56.631045119" Mar 13 00:39:51.578874 sshd[5441]: Connection closed by 20.161.92.111 port 50044 Mar 13 00:39:51.579905 sshd-session[5300]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:51.588570 systemd[1]: sshd@7-172.31.22.244:22-20.161.92.111:50044.service: Deactivated successfully. Mar 13 00:39:51.591091 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:39:51.592480 systemd-logind[1959]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:39:51.594843 systemd-logind[1959]: Removed session 8. 
Mar 13 00:39:51.666603 systemd-networkd[1795]: cali258fbba13a7: Gained IPv6LL Mar 13 00:39:51.848097 containerd[1987]: time="2026-03-13T00:39:51.847703679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4fc4d544-k9996,Uid:5c86dfd6-a5e4-4241-8df8-e41d28563f15,Namespace:calico-system,Attempt:0,}" Mar 13 00:39:51.848097 containerd[1987]: time="2026-03-13T00:39:51.847702140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8xphx,Uid:98194567-8a9d-4ddc-aedd-092811d13223,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:51.849295 containerd[1987]: time="2026-03-13T00:39:51.849269208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-s47n2,Uid:05bcf32a-e5fb-4c55-add3-c1efda8a3268,Namespace:calico-system,Attempt:0,}" Mar 13 00:39:52.192914 systemd-networkd[1795]: calibc779cf5e70: Link UP Mar 13 00:39:52.193629 systemd-networkd[1795]: calibc779cf5e70: Gained carrier Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:51.993 [INFO][5544] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0 calico-apiserver-f4fc4d544- calico-system 5c86dfd6-a5e4-4241-8df8-e41d28563f15 853 0 2026-03-13 00:39:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4fc4d544 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-244 calico-apiserver-f4fc4d544-k9996 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibc779cf5e70 [] [] }} ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-k9996" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-" Mar 13 00:39:52.217089 
containerd[1987]: 2026-03-13 00:39:51.994 [INFO][5544] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-k9996" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.111 [INFO][5576] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" HandleID="k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Workload="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.132 [INFO][5576] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" HandleID="k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Workload="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-244", "pod":"calico-apiserver-f4fc4d544-k9996", "timestamp":"2026-03-13 00:39:52.111548907 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000642420)} Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.132 [INFO][5576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.132 [INFO][5576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.133 [INFO][5576] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.136 [INFO][5576] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.141 [INFO][5576] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.148 [INFO][5576] ipam/ipam.go 526: Trying affinity for 192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.151 [INFO][5576] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.154 [INFO][5576] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.154 [INFO][5576] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.165 [INFO][5576] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.171 [INFO][5576] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.179 [INFO][5576] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.69/26] block=192.168.92.64/26 
handle="k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.179 [INFO][5576] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.69/26] handle="k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" host="ip-172-31-22-244" Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.179 [INFO][5576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:52.217089 containerd[1987]: 2026-03-13 00:39:52.179 [INFO][5576] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.69/26] IPv6=[] ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" HandleID="k8s-pod-network.2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Workload="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" Mar 13 00:39:52.219570 containerd[1987]: 2026-03-13 00:39:52.187 [INFO][5544] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-k9996" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0", GenerateName:"calico-apiserver-f4fc4d544-", Namespace:"calico-system", SelfLink:"", UID:"5c86dfd6-a5e4-4241-8df8-e41d28563f15", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4fc4d544", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"calico-apiserver-f4fc4d544-k9996", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc779cf5e70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:52.219570 containerd[1987]: 2026-03-13 00:39:52.187 [INFO][5544] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.69/32] ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-k9996" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" Mar 13 00:39:52.219570 containerd[1987]: 2026-03-13 00:39:52.187 [INFO][5544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc779cf5e70 ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-k9996" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" Mar 13 00:39:52.219570 containerd[1987]: 2026-03-13 00:39:52.194 [INFO][5544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-k9996" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" Mar 13 00:39:52.219570 containerd[1987]: 2026-03-13 00:39:52.195 [INFO][5544] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-k9996" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0", GenerateName:"calico-apiserver-f4fc4d544-", Namespace:"calico-system", SelfLink:"", UID:"5c86dfd6-a5e4-4241-8df8-e41d28563f15", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4fc4d544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab", Pod:"calico-apiserver-f4fc4d544-k9996", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc779cf5e70", MAC:"1a:2b:58:84:5a:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:52.219570 containerd[1987]: 2026-03-13 00:39:52.210 [INFO][5544] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-k9996" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--k9996-eth0" Mar 13 00:39:52.279380 containerd[1987]: time="2026-03-13T00:39:52.279314993Z" level=info msg="connecting to shim 2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab" address="unix:///run/containerd/s/5691e62e7bb1d9d4f7c9f5ca4b36487c659ee010b8c5025644854225538f7792" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:52.336044 systemd-networkd[1795]: cali1d293797005: Link UP Mar 13 00:39:52.336331 systemd-networkd[1795]: cali1d293797005: Gained carrier Mar 13 00:39:52.368590 systemd[1]: Started cri-containerd-2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab.scope - libcontainer container 2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab. Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:51.973 [INFO][5534] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0 coredns-674b8bbfcf- kube-system 98194567-8a9d-4ddc-aedd-092811d13223 849 0 2026-03-13 00:39:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-244 coredns-674b8bbfcf-8xphx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d293797005 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xphx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:51.974 [INFO][5534] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xphx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.107 [INFO][5570] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" HandleID="k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Workload="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.132 [INFO][5570] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" HandleID="k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Workload="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e8b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-244", "pod":"coredns-674b8bbfcf-8xphx", "timestamp":"2026-03-13 00:39:52.10790625 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000114c60)} Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.132 [INFO][5570] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.179 [INFO][5570] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.180 [INFO][5570] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.237 [INFO][5570] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.246 [INFO][5570] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.259 [INFO][5570] ipam/ipam.go 526: Trying affinity for 192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.269 [INFO][5570] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.277 [INFO][5570] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.277 [INFO][5570] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.282 [INFO][5570] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310 Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.293 [INFO][5570] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.312 [INFO][5570] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.70/26] block=192.168.92.64/26 
handle="k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.312 [INFO][5570] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.70/26] handle="k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" host="ip-172-31-22-244" Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.315 [INFO][5570] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:52.379968 containerd[1987]: 2026-03-13 00:39:52.315 [INFO][5570] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.70/26] IPv6=[] ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" HandleID="k8s-pod-network.4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Workload="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" Mar 13 00:39:52.381881 containerd[1987]: 2026-03-13 00:39:52.327 [INFO][5534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xphx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"98194567-8a9d-4ddc-aedd-092811d13223", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"coredns-674b8bbfcf-8xphx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d293797005", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:52.381881 containerd[1987]: 2026-03-13 00:39:52.328 [INFO][5534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.70/32] ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xphx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" Mar 13 00:39:52.381881 containerd[1987]: 2026-03-13 00:39:52.328 [INFO][5534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d293797005 ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xphx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" Mar 13 00:39:52.381881 containerd[1987]: 2026-03-13 00:39:52.332 [INFO][5534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-8xphx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" Mar 13 00:39:52.381881 containerd[1987]: 2026-03-13 00:39:52.333 [INFO][5534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xphx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"98194567-8a9d-4ddc-aedd-092811d13223", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310", Pod:"coredns-674b8bbfcf-8xphx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d293797005", MAC:"e2:9b:bc:f9:02:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:52.381881 containerd[1987]: 2026-03-13 00:39:52.362 [INFO][5534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xphx" WorkloadEndpoint="ip--172--31--22--244-k8s-coredns--674b8bbfcf--8xphx-eth0" Mar 13 00:39:52.458381 containerd[1987]: time="2026-03-13T00:39:52.458217239Z" level=info msg="connecting to shim 4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310" address="unix:///run/containerd/s/162907da5bc878342b921af1fb2eab6c068d7f274852e2cef5b36dd299ca308d" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:52.464413 systemd-networkd[1795]: cali08ac24d24d1: Link UP Mar 13 00:39:52.464924 systemd-networkd[1795]: cali08ac24d24d1: Gained carrier Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.002 [INFO][5550] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0 goldmane-5b85766d88- calico-system 05bcf32a-e5fb-4c55-add3-c1efda8a3268 852 0 2026-03-13 00:39:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-22-244 goldmane-5b85766d88-s47n2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali08ac24d24d1 [] [] }} ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Namespace="calico-system" 
Pod="goldmane-5b85766d88-s47n2" WorkloadEndpoint="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.002 [INFO][5550] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Namespace="calico-system" Pod="goldmane-5b85766d88-s47n2" WorkloadEndpoint="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.118 [INFO][5582] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" HandleID="k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Workload="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.136 [INFO][5582] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" HandleID="k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Workload="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380680), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-244", "pod":"goldmane-5b85766d88-s47n2", "timestamp":"2026-03-13 00:39:52.118720431 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000280000)} Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.137 [INFO][5582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.318 [INFO][5582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.318 [INFO][5582] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.348 [INFO][5582] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.375 [INFO][5582] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.391 [INFO][5582] ipam/ipam.go 526: Trying affinity for 192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.393 [INFO][5582] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.397 [INFO][5582] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.397 [INFO][5582] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.403 [INFO][5582] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509 Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.412 [INFO][5582] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.430 [INFO][5582] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.71/26] block=192.168.92.64/26 
handle="k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.430 [INFO][5582] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.71/26] handle="k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" host="ip-172-31-22-244" Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.430 [INFO][5582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:52.503046 containerd[1987]: 2026-03-13 00:39:52.430 [INFO][5582] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.71/26] IPv6=[] ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" HandleID="k8s-pod-network.2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Workload="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" Mar 13 00:39:52.505782 containerd[1987]: 2026-03-13 00:39:52.441 [INFO][5550] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Namespace="calico-system" Pod="goldmane-5b85766d88-s47n2" WorkloadEndpoint="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"05bcf32a-e5fb-4c55-add3-c1efda8a3268", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"goldmane-5b85766d88-s47n2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08ac24d24d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:52.505782 containerd[1987]: 2026-03-13 00:39:52.441 [INFO][5550] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.71/32] ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Namespace="calico-system" Pod="goldmane-5b85766d88-s47n2" WorkloadEndpoint="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" Mar 13 00:39:52.505782 containerd[1987]: 2026-03-13 00:39:52.441 [INFO][5550] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08ac24d24d1 ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Namespace="calico-system" Pod="goldmane-5b85766d88-s47n2" WorkloadEndpoint="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" Mar 13 00:39:52.505782 containerd[1987]: 2026-03-13 00:39:52.470 [INFO][5550] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Namespace="calico-system" Pod="goldmane-5b85766d88-s47n2" WorkloadEndpoint="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" Mar 13 00:39:52.505782 containerd[1987]: 2026-03-13 00:39:52.473 [INFO][5550] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Namespace="calico-system" Pod="goldmane-5b85766d88-s47n2" WorkloadEndpoint="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"05bcf32a-e5fb-4c55-add3-c1efda8a3268", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509", Pod:"goldmane-5b85766d88-s47n2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08ac24d24d1", MAC:"3e:87:0a:57:e9:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:52.505782 containerd[1987]: 2026-03-13 00:39:52.496 [INFO][5550] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" Namespace="calico-system" Pod="goldmane-5b85766d88-s47n2" 
WorkloadEndpoint="ip--172--31--22--244-k8s-goldmane--5b85766d88--s47n2-eth0" Mar 13 00:39:52.560569 systemd[1]: Started cri-containerd-4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310.scope - libcontainer container 4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310. Mar 13 00:39:52.597181 containerd[1987]: time="2026-03-13T00:39:52.597029041Z" level=info msg="connecting to shim 2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509" address="unix:///run/containerd/s/715f43549186b7766c57da67d68c9982967f14742fa51432b11c7ed1a4678599" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:52.661916 containerd[1987]: time="2026-03-13T00:39:52.661681332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4fc4d544-k9996,Uid:5c86dfd6-a5e4-4241-8df8-e41d28563f15,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab\"" Mar 13 00:39:52.675620 systemd[1]: Started cri-containerd-2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509.scope - libcontainer container 2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509. 
Mar 13 00:39:52.743301 containerd[1987]: time="2026-03-13T00:39:52.742481268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8xphx,Uid:98194567-8a9d-4ddc-aedd-092811d13223,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310\"" Mar 13 00:39:52.757120 containerd[1987]: time="2026-03-13T00:39:52.757072877Z" level=info msg="CreateContainer within sandbox \"4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:39:52.801711 containerd[1987]: time="2026-03-13T00:39:52.801604565Z" level=info msg="Container 5440559786840accdaf67f647b672ce21920c1ffbb1f5779e408af40282aee80: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:52.822350 containerd[1987]: time="2026-03-13T00:39:52.822285192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-s47n2,Uid:05bcf32a-e5fb-4c55-add3-c1efda8a3268,Namespace:calico-system,Attempt:0,} returns sandbox id \"2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509\"" Mar 13 00:39:52.823281 containerd[1987]: time="2026-03-13T00:39:52.823232700Z" level=info msg="CreateContainer within sandbox \"4b3ff076e53316031c71e63dc88fcd504c9b5aa5ce4ec8c9b18e9c43e968d310\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5440559786840accdaf67f647b672ce21920c1ffbb1f5779e408af40282aee80\"" Mar 13 00:39:52.824385 containerd[1987]: time="2026-03-13T00:39:52.824296924Z" level=info msg="StartContainer for \"5440559786840accdaf67f647b672ce21920c1ffbb1f5779e408af40282aee80\"" Mar 13 00:39:52.829647 containerd[1987]: time="2026-03-13T00:39:52.826293672Z" level=info msg="connecting to shim 5440559786840accdaf67f647b672ce21920c1ffbb1f5779e408af40282aee80" address="unix:///run/containerd/s/162907da5bc878342b921af1fb2eab6c068d7f274852e2cef5b36dd299ca308d" protocol=ttrpc version=3 Mar 13 00:39:52.902844 systemd[1]: Started 
cri-containerd-5440559786840accdaf67f647b672ce21920c1ffbb1f5779e408af40282aee80.scope - libcontainer container 5440559786840accdaf67f647b672ce21920c1ffbb1f5779e408af40282aee80. Mar 13 00:39:52.984179 containerd[1987]: time="2026-03-13T00:39:52.984130474Z" level=info msg="StartContainer for \"5440559786840accdaf67f647b672ce21920c1ffbb1f5779e408af40282aee80\" returns successfully" Mar 13 00:39:53.090903 containerd[1987]: time="2026-03-13T00:39:53.090769520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:53.092828 containerd[1987]: time="2026-03-13T00:39:53.092760308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 13 00:39:53.097040 containerd[1987]: time="2026-03-13T00:39:53.095793831Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:53.101808 containerd[1987]: time="2026-03-13T00:39:53.101732903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:53.102647 containerd[1987]: time="2026-03-13T00:39:53.102435758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.52502308s" Mar 13 00:39:53.102647 containerd[1987]: time="2026-03-13T00:39:53.102480330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 13 00:39:53.104461 containerd[1987]: time="2026-03-13T00:39:53.103967887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:39:53.110483 containerd[1987]: time="2026-03-13T00:39:53.110395401Z" level=info msg="CreateContainer within sandbox \"027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:39:53.138679 containerd[1987]: time="2026-03-13T00:39:53.138634249Z" level=info msg="Container 2b86516075c3529b8e88cbb2d88b386b4de38fb15d1af1460ff503dae63a77aa: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:53.153319 containerd[1987]: time="2026-03-13T00:39:53.153257817Z" level=info msg="CreateContainer within sandbox \"027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2b86516075c3529b8e88cbb2d88b386b4de38fb15d1af1460ff503dae63a77aa\"" Mar 13 00:39:53.155223 containerd[1987]: time="2026-03-13T00:39:53.155173805Z" level=info msg="StartContainer for \"2b86516075c3529b8e88cbb2d88b386b4de38fb15d1af1460ff503dae63a77aa\"" Mar 13 00:39:53.159147 containerd[1987]: time="2026-03-13T00:39:53.158216752Z" level=info msg="connecting to shim 2b86516075c3529b8e88cbb2d88b386b4de38fb15d1af1460ff503dae63a77aa" address="unix:///run/containerd/s/3d31d4a1f9dee67ca9ae4fc4b6dc92772cf3f5a48373daa3d083181edaec11eb" protocol=ttrpc version=3 Mar 13 00:39:53.201705 systemd[1]: Started cri-containerd-2b86516075c3529b8e88cbb2d88b386b4de38fb15d1af1460ff503dae63a77aa.scope - libcontainer container 2b86516075c3529b8e88cbb2d88b386b4de38fb15d1af1460ff503dae63a77aa. 
Mar 13 00:39:53.289709 containerd[1987]: time="2026-03-13T00:39:53.289667804Z" level=info msg="StartContainer for \"2b86516075c3529b8e88cbb2d88b386b4de38fb15d1af1460ff503dae63a77aa\" returns successfully" Mar 13 00:39:53.307214 kubelet[3328]: I0313 00:39:53.307054 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8xphx" podStartSLOduration=53.307032906 podStartE2EDuration="53.307032906s" podCreationTimestamp="2026-03-13 00:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:39:53.304630117 +0000 UTC m=+58.643711639" watchObservedRunningTime="2026-03-13 00:39:53.307032906 +0000 UTC m=+58.646114403" Mar 13 00:39:53.650834 systemd-networkd[1795]: cali1d293797005: Gained IPv6LL Mar 13 00:39:53.847596 containerd[1987]: time="2026-03-13T00:39:53.847512446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4fc4d544-jrxrr,Uid:7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf,Namespace:calico-system,Attempt:0,}" Mar 13 00:39:53.847916 containerd[1987]: time="2026-03-13T00:39:53.847512441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8498cbcb77-2dh8t,Uid:232d682b-1e73-4352-a839-d93cdc2b9af5,Namespace:calico-system,Attempt:0,}" Mar 13 00:39:53.908462 systemd-networkd[1795]: calibc779cf5e70: Gained IPv6LL Mar 13 00:39:53.970986 systemd-networkd[1795]: cali08ac24d24d1: Gained IPv6LL Mar 13 00:39:54.158139 systemd-networkd[1795]: calie96fb2da063: Link UP Mar 13 00:39:54.161904 systemd-networkd[1795]: calie96fb2da063: Gained carrier Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.018 [INFO][5867] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0 calico-kube-controllers-8498cbcb77- calico-system 
232d682b-1e73-4352-a839-d93cdc2b9af5 855 0 2026-03-13 00:39:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8498cbcb77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-244 calico-kube-controllers-8498cbcb77-2dh8t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie96fb2da063 [] [] }} ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Namespace="calico-system" Pod="calico-kube-controllers-8498cbcb77-2dh8t" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.019 [INFO][5867] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Namespace="calico-system" Pod="calico-kube-controllers-8498cbcb77-2dh8t" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.081 [INFO][5888] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" HandleID="k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Workload="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.092 [INFO][5888] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" HandleID="k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Workload="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002efe90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-244", "pod":"calico-kube-controllers-8498cbcb77-2dh8t", "timestamp":"2026-03-13 00:39:54.08151624 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f6dc0)} Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.092 [INFO][5888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.092 [INFO][5888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.092 [INFO][5888] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.095 [INFO][5888] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.105 [INFO][5888] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.113 [INFO][5888] ipam/ipam.go 526: Trying affinity for 192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.119 [INFO][5888] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.122 [INFO][5888] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.122 [INFO][5888] ipam/ipam.go 1245: Attempting to assign 1 addresses from block 
block=192.168.92.64/26 handle="k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.125 [INFO][5888] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4 Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.132 [INFO][5888] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.142 [INFO][5888] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.72/26] block=192.168.92.64/26 handle="k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.142 [INFO][5888] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.72/26] handle="k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" host="ip-172-31-22-244" Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.143 [INFO][5888] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:39:54.192797 containerd[1987]: 2026-03-13 00:39:54.143 [INFO][5888] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.72/26] IPv6=[] ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" HandleID="k8s-pod-network.acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Workload="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" Mar 13 00:39:54.195339 containerd[1987]: 2026-03-13 00:39:54.148 [INFO][5867] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Namespace="calico-system" Pod="calico-kube-controllers-8498cbcb77-2dh8t" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0", GenerateName:"calico-kube-controllers-8498cbcb77-", Namespace:"calico-system", SelfLink:"", UID:"232d682b-1e73-4352-a839-d93cdc2b9af5", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8498cbcb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"calico-kube-controllers-8498cbcb77-2dh8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.92.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie96fb2da063", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:54.195339 containerd[1987]: 2026-03-13 00:39:54.148 [INFO][5867] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.72/32] ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Namespace="calico-system" Pod="calico-kube-controllers-8498cbcb77-2dh8t" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" Mar 13 00:39:54.195339 containerd[1987]: 2026-03-13 00:39:54.148 [INFO][5867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie96fb2da063 ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Namespace="calico-system" Pod="calico-kube-controllers-8498cbcb77-2dh8t" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" Mar 13 00:39:54.195339 containerd[1987]: 2026-03-13 00:39:54.161 [INFO][5867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Namespace="calico-system" Pod="calico-kube-controllers-8498cbcb77-2dh8t" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" Mar 13 00:39:54.195339 containerd[1987]: 2026-03-13 00:39:54.165 [INFO][5867] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Namespace="calico-system" Pod="calico-kube-controllers-8498cbcb77-2dh8t" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0", GenerateName:"calico-kube-controllers-8498cbcb77-", Namespace:"calico-system", SelfLink:"", UID:"232d682b-1e73-4352-a839-d93cdc2b9af5", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8498cbcb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4", Pod:"calico-kube-controllers-8498cbcb77-2dh8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie96fb2da063", MAC:"4e:21:be:44:24:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:54.195339 containerd[1987]: 2026-03-13 00:39:54.184 [INFO][5867] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" Namespace="calico-system" Pod="calico-kube-controllers-8498cbcb77-2dh8t" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--kube--controllers--8498cbcb77--2dh8t-eth0" 
Mar 13 00:39:54.274397 containerd[1987]: time="2026-03-13T00:39:54.273854460Z" level=info msg="connecting to shim acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4" address="unix:///run/containerd/s/c722b04639333bce3ce6583cf39ea9dec13c493096cc3ce5553ad995b411cb16" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:54.379105 systemd-networkd[1795]: cali3fed603a332: Link UP Mar 13 00:39:54.380712 systemd-networkd[1795]: cali3fed603a332: Gained carrier Mar 13 00:39:54.385485 systemd[1]: Started cri-containerd-acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4.scope - libcontainer container acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4. Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:53.989 [INFO][5851] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0 calico-apiserver-f4fc4d544- calico-system 7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf 856 0 2026-03-13 00:39:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4fc4d544 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-244 calico-apiserver-f4fc4d544-jrxrr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3fed603a332 [] [] }} ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-jrxrr" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:53.991 [INFO][5851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-jrxrr" 
WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.108 [INFO][5885] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" HandleID="k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Workload="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.125 [INFO][5885] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" HandleID="k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Workload="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000379d60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-244", "pod":"calico-apiserver-f4fc4d544-jrxrr", "timestamp":"2026-03-13 00:39:54.108933673 +0000 UTC"}, Hostname:"ip-172-31-22-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000536160)} Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.125 [INFO][5885] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.142 [INFO][5885] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.143 [INFO][5885] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-244' Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.199 [INFO][5885] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.209 [INFO][5885] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.217 [INFO][5885] ipam/ipam.go 526: Trying affinity for 192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.223 [INFO][5885] ipam/ipam.go 160: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.227 [INFO][5885] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.229 [INFO][5885] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.234 [INFO][5885] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3 Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.263 [INFO][5885] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.312 [INFO][5885] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.92.73/26] block=192.168.92.64/26 
handle="k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.318 [INFO][5885] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.92.73/26] handle="k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" host="ip-172-31-22-244" Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.319 [INFO][5885] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:54.451556 containerd[1987]: 2026-03-13 00:39:54.325 [INFO][5885] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.92.73/26] IPv6=[] ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" HandleID="k8s-pod-network.60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Workload="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" Mar 13 00:39:54.455731 containerd[1987]: 2026-03-13 00:39:54.339 [INFO][5851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-jrxrr" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0", GenerateName:"calico-apiserver-f4fc4d544-", Namespace:"calico-system", SelfLink:"", UID:"7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4fc4d544", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"", Pod:"calico-apiserver-f4fc4d544-jrxrr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3fed603a332", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:54.455731 containerd[1987]: 2026-03-13 00:39:54.339 [INFO][5851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.73/32] ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-jrxrr" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" Mar 13 00:39:54.455731 containerd[1987]: 2026-03-13 00:39:54.339 [INFO][5851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fed603a332 ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-jrxrr" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" Mar 13 00:39:54.455731 containerd[1987]: 2026-03-13 00:39:54.382 [INFO][5851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-jrxrr" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" Mar 13 00:39:54.455731 containerd[1987]: 2026-03-13 00:39:54.384 [INFO][5851] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-jrxrr" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0", GenerateName:"calico-apiserver-f4fc4d544-", Namespace:"calico-system", SelfLink:"", UID:"7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4fc4d544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-244", ContainerID:"60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3", Pod:"calico-apiserver-f4fc4d544-jrxrr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3fed603a332", MAC:"aa:f3:85:c4:cb:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:39:54.455731 containerd[1987]: 2026-03-13 00:39:54.446 [INFO][5851] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" Namespace="calico-system" Pod="calico-apiserver-f4fc4d544-jrxrr" WorkloadEndpoint="ip--172--31--22--244-k8s-calico--apiserver--f4fc4d544--jrxrr-eth0" Mar 13 00:39:54.537390 containerd[1987]: time="2026-03-13T00:39:54.536696324Z" level=info msg="connecting to shim 60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3" address="unix:///run/containerd/s/d58b71fddf19622052dbc32460d9dec9991c9e025042057a8d8709f7de311ccb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:54.606808 systemd[1]: Started cri-containerd-60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3.scope - libcontainer container 60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3. Mar 13 00:39:54.791467 containerd[1987]: time="2026-03-13T00:39:54.790857381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8498cbcb77-2dh8t,Uid:232d682b-1e73-4352-a839-d93cdc2b9af5,Namespace:calico-system,Attempt:0,} returns sandbox id \"acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4\"" Mar 13 00:39:54.869593 containerd[1987]: time="2026-03-13T00:39:54.869508183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4fc4d544-jrxrr,Uid:7c86ec91-9cc0-46f5-9cdd-0a50a7ddacaf,Namespace:calico-system,Attempt:0,} returns sandbox id \"60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3\"" Mar 13 00:39:55.001151 containerd[1987]: time="2026-03-13T00:39:55.001099690Z" level=info msg="StopPodSandbox for \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\"" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.091 [WARNING][6035] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" 
WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.091 [INFO][6035] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.091 [INFO][6035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" iface="eth0" netns="" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.091 [INFO][6035] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.091 [INFO][6035] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.167 [INFO][6042] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.167 [INFO][6042] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.168 [INFO][6042] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.187 [WARNING][6042] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.187 [INFO][6042] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.190 [INFO][6042] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:55.196951 containerd[1987]: 2026-03-13 00:39:55.193 [INFO][6035] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:55.198921 containerd[1987]: time="2026-03-13T00:39:55.197312817Z" level=info msg="TearDown network for sandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" successfully" Mar 13 00:39:55.198921 containerd[1987]: time="2026-03-13T00:39:55.197350390Z" level=info msg="StopPodSandbox for \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" returns successfully" Mar 13 00:39:55.211143 containerd[1987]: time="2026-03-13T00:39:55.211043440Z" level=info msg="RemovePodSandbox for \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\"" Mar 13 00:39:55.219266 containerd[1987]: time="2026-03-13T00:39:55.219061251Z" level=info msg="Forcibly stopping sandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\"" Mar 13 00:39:55.378776 systemd-networkd[1795]: calie96fb2da063: Gained IPv6LL Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.284 [WARNING][6056] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving 
forward with the clean up ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" WorkloadEndpoint="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.284 [INFO][6056] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.284 [INFO][6056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" iface="eth0" netns="" Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.284 [INFO][6056] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.284 [INFO][6056] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.357 [INFO][6063] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.357 [INFO][6063] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.357 [INFO][6063] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.369 [WARNING][6063] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.369 [INFO][6063] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" HandleID="k8s-pod-network.b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Workload="ip--172--31--22--244-k8s-whisker--7bf747955c--zvrbt-eth0" Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.371 [INFO][6063] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:39:55.387139 containerd[1987]: 2026-03-13 00:39:55.377 [INFO][6056] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297" Mar 13 00:39:55.388591 containerd[1987]: time="2026-03-13T00:39:55.387175420Z" level=info msg="TearDown network for sandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" successfully" Mar 13 00:39:55.391834 containerd[1987]: time="2026-03-13T00:39:55.391718924Z" level=info msg="Ensure that sandbox b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297 in task-service has been cleanup successfully" Mar 13 00:39:55.412401 containerd[1987]: time="2026-03-13T00:39:55.412327250Z" level=info msg="RemovePodSandbox \"b0ec1f582f26c4ad079f411eebaeae877fc9846b5c5cdaa9fe70f506af4a0297\" returns successfully" Mar 13 00:39:55.635060 systemd-networkd[1795]: cali3fed603a332: Gained IPv6LL Mar 13 00:39:56.413722 containerd[1987]: time="2026-03-13T00:39:56.413667614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:56.415802 containerd[1987]: time="2026-03-13T00:39:56.415679460Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:39:56.418399 containerd[1987]: time="2026-03-13T00:39:56.418245717Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:56.421789 containerd[1987]: time="2026-03-13T00:39:56.421748369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:56.422794 containerd[1987]: time="2026-03-13T00:39:56.422658747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.318654581s" Mar 13 00:39:56.422794 containerd[1987]: time="2026-03-13T00:39:56.422698313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:39:56.424042 containerd[1987]: time="2026-03-13T00:39:56.423978771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:39:56.430732 containerd[1987]: time="2026-03-13T00:39:56.430691796Z" level=info msg="CreateContainer within sandbox \"2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:39:56.444975 containerd[1987]: time="2026-03-13T00:39:56.442376512Z" level=info msg="Container c360e28f4dcd150db762b7bc90b0d82ae91f96cab6a275ec075fc6ffdc741bac: CDI devices from CRI Config.CDIDevices: []" Mar 13 
00:39:56.462176 containerd[1987]: time="2026-03-13T00:39:56.462047119Z" level=info msg="CreateContainer within sandbox \"2d13577d83b00cebea906801b7d9ad38e31b3382441a079a99d628238d56b1ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c360e28f4dcd150db762b7bc90b0d82ae91f96cab6a275ec075fc6ffdc741bac\"" Mar 13 00:39:56.464594 containerd[1987]: time="2026-03-13T00:39:56.464521004Z" level=info msg="StartContainer for \"c360e28f4dcd150db762b7bc90b0d82ae91f96cab6a275ec075fc6ffdc741bac\"" Mar 13 00:39:56.465970 containerd[1987]: time="2026-03-13T00:39:56.465938600Z" level=info msg="connecting to shim c360e28f4dcd150db762b7bc90b0d82ae91f96cab6a275ec075fc6ffdc741bac" address="unix:///run/containerd/s/5691e62e7bb1d9d4f7c9f5ca4b36487c659ee010b8c5025644854225538f7792" protocol=ttrpc version=3 Mar 13 00:39:56.527608 systemd[1]: Started cri-containerd-c360e28f4dcd150db762b7bc90b0d82ae91f96cab6a275ec075fc6ffdc741bac.scope - libcontainer container c360e28f4dcd150db762b7bc90b0d82ae91f96cab6a275ec075fc6ffdc741bac. Mar 13 00:39:56.613456 containerd[1987]: time="2026-03-13T00:39:56.612697783Z" level=info msg="StartContainer for \"c360e28f4dcd150db762b7bc90b0d82ae91f96cab6a275ec075fc6ffdc741bac\" returns successfully" Mar 13 00:39:56.683458 systemd[1]: Started sshd@8-172.31.22.244:22-20.161.92.111:36596.service - OpenSSH per-connection server daemon (20.161.92.111:36596). Mar 13 00:39:57.218426 sshd[6119]: Accepted publickey for core from 20.161.92.111 port 36596 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:39:57.221252 sshd-session[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:57.232949 systemd-logind[1959]: New session 9 of user core. Mar 13 00:39:57.237587 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 13 00:39:57.347428 kubelet[3328]: I0313 00:39:57.346335 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-f4fc4d544-k9996" podStartSLOduration=38.613462497 podStartE2EDuration="42.345491802s" podCreationTimestamp="2026-03-13 00:39:15 +0000 UTC" firstStartedPulling="2026-03-13 00:39:52.69185485 +0000 UTC m=+58.030936335" lastFinishedPulling="2026-03-13 00:39:56.423855348 +0000 UTC m=+61.762965640" observedRunningTime="2026-03-13 00:39:57.34373586 +0000 UTC m=+62.682817365" watchObservedRunningTime="2026-03-13 00:39:57.345491802 +0000 UTC m=+62.684573298" Mar 13 00:39:57.929020 ntpd[2211]: Listen normally on 6 vxlan.calico 192.168.92.65:123 Mar 13 00:39:57.929089 ntpd[2211]: Listen normally on 7 vxlan.calico [fe80::64d6:2eff:fe8a:13f0%5]:123 Mar 13 00:39:57.929122 ntpd[2211]: Listen normally on 8 calic3a13027c8f [fe80::ecee:eeff:feee:eeee%8]:123 Mar 13 00:39:57.929149 ntpd[2211]: Listen normally on 9 cali699cf1d2479 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 13 00:39:57.929175 ntpd[2211]: Listen normally on 10 cali258fbba13a7 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 13 00:39:57.929201 ntpd[2211]: Listen normally on 11 calibc779cf5e70 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 13 00:39:57.929230 ntpd[2211]: Listen normally on 12 cali1d293797005 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 13 00:39:57.929256 ntpd[2211]: Listen normally on 13 cali08ac24d24d1 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 13 00:39:57.929282 ntpd[2211]: Listen normally on 14 calie96fb2da063 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 13 00:39:57.929308 ntpd[2211]: Listen normally on 15 cali3fed603a332 [fe80::ecee:eeff:feee:eeee%15]:123 Mar 13 00:39:58.329500 kubelet[3328]: I0313 00:39:58.329277 3328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:39:58.624548 sshd[6124]: Connection closed by 20.161.92.111 port 36596 Mar 13 00:39:58.625073 sshd-session[6119]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:58.638285 systemd-logind[1959]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:39:58.643604 systemd[1]: sshd@8-172.31.22.244:22-20.161.92.111:36596.service: Deactivated successfully. Mar 13 00:39:58.650635 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:39:58.655262 systemd-logind[1959]: Removed session 9. Mar 13 00:40:03.235166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028959177.mount: Deactivated successfully. Mar 13 00:40:03.716638 systemd[1]: Started sshd@9-172.31.22.244:22-20.161.92.111:45182.service - OpenSSH per-connection server daemon (20.161.92.111:45182).
Mar 13 00:40:04.000155 containerd[1987]: time="2026-03-13T00:40:03.999933017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:04.001726 containerd[1987]: time="2026-03-13T00:40:04.001677431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 13 00:40:04.038779 containerd[1987]: time="2026-03-13T00:40:04.038723180Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:04.041859 containerd[1987]: time="2026-03-13T00:40:04.041788667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:04.042735 containerd[1987]: time="2026-03-13T00:40:04.042694650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 7.618686204s" Mar 13 00:40:04.042863 containerd[1987]: time="2026-03-13T00:40:04.042744909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 13 00:40:04.044118 containerd[1987]: time="2026-03-13T00:40:04.044050699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 00:40:04.094609 containerd[1987]: time="2026-03-13T00:40:04.094560783Z" level=info msg="CreateContainer within sandbox 
\"2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:40:04.149498 containerd[1987]: time="2026-03-13T00:40:04.146514556Z" level=info msg="Container ff775bf4552a1c1007bdd4f1d873984d61d7f554fdc76d5941b788dbfc225dda: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:04.212783 containerd[1987]: time="2026-03-13T00:40:04.212602453Z" level=info msg="CreateContainer within sandbox \"2efa9a66904e314e8c101a7ea7d9d7226e31a4ae80431ffd4a9ad3068a8ee509\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ff775bf4552a1c1007bdd4f1d873984d61d7f554fdc76d5941b788dbfc225dda\"" Mar 13 00:40:04.213943 containerd[1987]: time="2026-03-13T00:40:04.213510681Z" level=info msg="StartContainer for \"ff775bf4552a1c1007bdd4f1d873984d61d7f554fdc76d5941b788dbfc225dda\"" Mar 13 00:40:04.230809 containerd[1987]: time="2026-03-13T00:40:04.230723666Z" level=info msg="connecting to shim ff775bf4552a1c1007bdd4f1d873984d61d7f554fdc76d5941b788dbfc225dda" address="unix:///run/containerd/s/715f43549186b7766c57da67d68c9982967f14742fa51432b11c7ed1a4678599" protocol=ttrpc version=3 Mar 13 00:40:04.286549 sshd[6157]: Accepted publickey for core from 20.161.92.111 port 45182 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:04.289158 sshd-session[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:04.297150 systemd-logind[1959]: New session 10 of user core. Mar 13 00:40:04.311235 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:40:04.442882 systemd[1]: Started cri-containerd-ff775bf4552a1c1007bdd4f1d873984d61d7f554fdc76d5941b788dbfc225dda.scope - libcontainer container ff775bf4552a1c1007bdd4f1d873984d61d7f554fdc76d5941b788dbfc225dda. 
Mar 13 00:40:04.620984 containerd[1987]: time="2026-03-13T00:40:04.620942717Z" level=info msg="StartContainer for \"ff775bf4552a1c1007bdd4f1d873984d61d7f554fdc76d5941b788dbfc225dda\" returns successfully" Mar 13 00:40:05.252198 sshd[6175]: Connection closed by 20.161.92.111 port 45182 Mar 13 00:40:05.253546 sshd-session[6157]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:05.259044 systemd[1]: sshd@9-172.31.22.244:22-20.161.92.111:45182.service: Deactivated successfully. Mar 13 00:40:05.261660 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:40:05.263433 systemd-logind[1959]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:40:05.266624 systemd-logind[1959]: Removed session 10. Mar 13 00:40:06.233849 containerd[1987]: time="2026-03-13T00:40:06.233795365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:06.234881 containerd[1987]: time="2026-03-13T00:40:06.234732168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 13 00:40:06.235987 containerd[1987]: time="2026-03-13T00:40:06.235952996Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:06.239076 containerd[1987]: time="2026-03-13T00:40:06.239016919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:06.239496 containerd[1987]: time="2026-03-13T00:40:06.239463930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.195375782s" Mar 13 00:40:06.239581 containerd[1987]: time="2026-03-13T00:40:06.239505461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 13 00:40:06.283578 containerd[1987]: time="2026-03-13T00:40:06.283537580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 00:40:06.311552 containerd[1987]: time="2026-03-13T00:40:06.311507824Z" level=info msg="CreateContainer within sandbox \"027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:40:06.324678 containerd[1987]: time="2026-03-13T00:40:06.324542741Z" level=info msg="Container 1b47b13700965af127aaaa32459b6f2dc5288f41727ca8da8f4e9ee4a802dcc0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:06.337970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288526157.mount: Deactivated successfully. 
Mar 13 00:40:06.349484 containerd[1987]: time="2026-03-13T00:40:06.349435641Z" level=info msg="CreateContainer within sandbox \"027d5f93ee5d969c2a3b15f4adcb41f07fae4f5f8d69e3b0572bfd189e85681d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1b47b13700965af127aaaa32459b6f2dc5288f41727ca8da8f4e9ee4a802dcc0\"" Mar 13 00:40:06.350334 containerd[1987]: time="2026-03-13T00:40:06.350295573Z" level=info msg="StartContainer for \"1b47b13700965af127aaaa32459b6f2dc5288f41727ca8da8f4e9ee4a802dcc0\"" Mar 13 00:40:06.353277 containerd[1987]: time="2026-03-13T00:40:06.353237918Z" level=info msg="connecting to shim 1b47b13700965af127aaaa32459b6f2dc5288f41727ca8da8f4e9ee4a802dcc0" address="unix:///run/containerd/s/3d31d4a1f9dee67ca9ae4fc4b6dc92772cf3f5a48373daa3d083181edaec11eb" protocol=ttrpc version=3 Mar 13 00:40:06.384281 systemd[1]: Started cri-containerd-1b47b13700965af127aaaa32459b6f2dc5288f41727ca8da8f4e9ee4a802dcc0.scope - libcontainer container 1b47b13700965af127aaaa32459b6f2dc5288f41727ca8da8f4e9ee4a802dcc0. 
Mar 13 00:40:06.469955 containerd[1987]: time="2026-03-13T00:40:06.468818034Z" level=info msg="StartContainer for \"1b47b13700965af127aaaa32459b6f2dc5288f41727ca8da8f4e9ee4a802dcc0\" returns successfully" Mar 13 00:40:06.588114 kubelet[3328]: I0313 00:40:06.587852 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-s47n2" podStartSLOduration=41.38203819 podStartE2EDuration="52.583930444s" podCreationTimestamp="2026-03-13 00:39:14 +0000 UTC" firstStartedPulling="2026-03-13 00:39:52.841999084 +0000 UTC m=+58.181080562" lastFinishedPulling="2026-03-13 00:40:04.043891336 +0000 UTC m=+69.382972816" observedRunningTime="2026-03-13 00:40:05.682895466 +0000 UTC m=+71.021976962" watchObservedRunningTime="2026-03-13 00:40:06.583930444 +0000 UTC m=+71.923011939" Mar 13 00:40:07.251306 kubelet[3328]: I0313 00:40:07.251238 3328 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:40:07.260040 kubelet[3328]: I0313 00:40:07.259999 3328 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:40:10.280769 containerd[1987]: time="2026-03-13T00:40:10.280696980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:10.282319 containerd[1987]: time="2026-03-13T00:40:10.282261860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 13 00:40:10.283323 containerd[1987]: time="2026-03-13T00:40:10.283265375Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:10.285999 containerd[1987]: 
time="2026-03-13T00:40:10.285931800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:10.286625 containerd[1987]: time="2026-03-13T00:40:10.286589679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.002732453s" Mar 13 00:40:10.286744 containerd[1987]: time="2026-03-13T00:40:10.286649459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 13 00:40:10.288041 containerd[1987]: time="2026-03-13T00:40:10.287971828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:40:10.344621 systemd[1]: Started sshd@10-172.31.22.244:22-20.161.92.111:39636.service - OpenSSH per-connection server daemon (20.161.92.111:39636). 
Mar 13 00:40:10.461275 containerd[1987]: time="2026-03-13T00:40:10.461236523Z" level=info msg="CreateContainer within sandbox \"acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:40:10.475628 containerd[1987]: time="2026-03-13T00:40:10.475580355Z" level=info msg="Container 12968ec5fa7bbceea9fcc1ce5c9d548ff04abab2905a5d06b037792b713d9580: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:10.492772 containerd[1987]: time="2026-03-13T00:40:10.492734584Z" level=info msg="CreateContainer within sandbox \"acb59b05d4b65a26421a025e27eeb6ae1b46f24310bc3a7014d455beb5972ea4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"12968ec5fa7bbceea9fcc1ce5c9d548ff04abab2905a5d06b037792b713d9580\"" Mar 13 00:40:10.495459 containerd[1987]: time="2026-03-13T00:40:10.494623403Z" level=info msg="StartContainer for \"12968ec5fa7bbceea9fcc1ce5c9d548ff04abab2905a5d06b037792b713d9580\"" Mar 13 00:40:10.497378 containerd[1987]: time="2026-03-13T00:40:10.497306870Z" level=info msg="connecting to shim 12968ec5fa7bbceea9fcc1ce5c9d548ff04abab2905a5d06b037792b713d9580" address="unix:///run/containerd/s/c722b04639333bce3ce6583cf39ea9dec13c493096cc3ce5553ad995b411cb16" protocol=ttrpc version=3 Mar 13 00:40:10.558672 systemd[1]: Started cri-containerd-12968ec5fa7bbceea9fcc1ce5c9d548ff04abab2905a5d06b037792b713d9580.scope - libcontainer container 12968ec5fa7bbceea9fcc1ce5c9d548ff04abab2905a5d06b037792b713d9580. 
Mar 13 00:40:10.634175 containerd[1987]: time="2026-03-13T00:40:10.632655463Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:10.634175 containerd[1987]: time="2026-03-13T00:40:10.633444216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 00:40:10.637458 containerd[1987]: time="2026-03-13T00:40:10.637417521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 349.413961ms" Mar 13 00:40:10.637598 containerd[1987]: time="2026-03-13T00:40:10.637499829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:40:10.647017 containerd[1987]: time="2026-03-13T00:40:10.646974354Z" level=info msg="CreateContainer within sandbox \"60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:40:10.658727 containerd[1987]: time="2026-03-13T00:40:10.658672929Z" level=info msg="Container b4bbd4c79729e62e9d61cc3ebbc926167eb6cc60e6267b2ed270782fbcca4adf: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:10.676249 containerd[1987]: time="2026-03-13T00:40:10.676181882Z" level=info msg="StartContainer for \"12968ec5fa7bbceea9fcc1ce5c9d548ff04abab2905a5d06b037792b713d9580\" returns successfully" Mar 13 00:40:10.677968 containerd[1987]: time="2026-03-13T00:40:10.677010304Z" level=info msg="CreateContainer within sandbox \"60dae87cb5954839aa449c3f7397e4012f52b29feea7687ad1e70a087cbf5bf3\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b4bbd4c79729e62e9d61cc3ebbc926167eb6cc60e6267b2ed270782fbcca4adf\"" Mar 13 00:40:10.679401 containerd[1987]: time="2026-03-13T00:40:10.679372081Z" level=info msg="StartContainer for \"b4bbd4c79729e62e9d61cc3ebbc926167eb6cc60e6267b2ed270782fbcca4adf\"" Mar 13 00:40:10.682025 containerd[1987]: time="2026-03-13T00:40:10.681908136Z" level=info msg="connecting to shim b4bbd4c79729e62e9d61cc3ebbc926167eb6cc60e6267b2ed270782fbcca4adf" address="unix:///run/containerd/s/d58b71fddf19622052dbc32460d9dec9991c9e025042057a8d8709f7de311ccb" protocol=ttrpc version=3 Mar 13 00:40:10.736867 systemd[1]: Started cri-containerd-b4bbd4c79729e62e9d61cc3ebbc926167eb6cc60e6267b2ed270782fbcca4adf.scope - libcontainer container b4bbd4c79729e62e9d61cc3ebbc926167eb6cc60e6267b2ed270782fbcca4adf. Mar 13 00:40:10.921987 containerd[1987]: time="2026-03-13T00:40:10.921899075Z" level=info msg="StartContainer for \"b4bbd4c79729e62e9d61cc3ebbc926167eb6cc60e6267b2ed270782fbcca4adf\" returns successfully" Mar 13 00:40:10.978579 sshd[6342]: Accepted publickey for core from 20.161.92.111 port 39636 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:10.983112 sshd-session[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:10.992116 systemd-logind[1959]: New session 11 of user core. Mar 13 00:40:10.996669 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 13 00:40:12.263111 kubelet[3328]: I0313 00:40:12.256958 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bkmzp" podStartSLOduration=40.444578911 podStartE2EDuration="56.152324033s" podCreationTimestamp="2026-03-13 00:39:16 +0000 UTC" firstStartedPulling="2026-03-13 00:39:50.571526928 +0000 UTC m=+55.910608412" lastFinishedPulling="2026-03-13 00:40:06.279272049 +0000 UTC m=+71.618353534" observedRunningTime="2026-03-13 00:40:06.594946281 +0000 UTC m=+71.934027802" watchObservedRunningTime="2026-03-13 00:40:12.152324033 +0000 UTC m=+77.491405530" Mar 13 00:40:12.428830 kubelet[3328]: I0313 00:40:12.428223 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8498cbcb77-2dh8t" podStartSLOduration=40.936317973 podStartE2EDuration="56.428201456s" podCreationTimestamp="2026-03-13 00:39:16 +0000 UTC" firstStartedPulling="2026-03-13 00:39:54.795950303 +0000 UTC m=+60.135031776" lastFinishedPulling="2026-03-13 00:40:10.287833766 +0000 UTC m=+75.626915259" observedRunningTime="2026-03-13 00:40:12.427690648 +0000 UTC m=+77.766772143" watchObservedRunningTime="2026-03-13 00:40:12.428201456 +0000 UTC m=+77.767282948" Mar 13 00:40:12.433778 kubelet[3328]: I0313 00:40:12.432955 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-f4fc4d544-jrxrr" podStartSLOduration=41.780595134 podStartE2EDuration="57.432932112s" podCreationTimestamp="2026-03-13 00:39:15 +0000 UTC" firstStartedPulling="2026-03-13 00:39:54.986339146 +0000 UTC m=+60.325420621" lastFinishedPulling="2026-03-13 00:40:10.638676113 +0000 UTC m=+75.977757599" observedRunningTime="2026-03-13 00:40:12.30793404 +0000 UTC m=+77.647015532" watchObservedRunningTime="2026-03-13 00:40:12.432932112 +0000 UTC m=+77.772013608" Mar 13 00:40:12.472858 sshd[6420]: Connection closed by 20.161.92.111 port 39636 Mar 13 00:40:12.468928 sshd-session[6342]: 
pam_unix(sshd:session): session closed for user core Mar 13 00:40:12.476148 systemd[1]: sshd@10-172.31.22.244:22-20.161.92.111:39636.service: Deactivated successfully. Mar 13 00:40:12.481084 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:40:12.484588 systemd-logind[1959]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:40:12.487012 systemd-logind[1959]: Removed session 11. Mar 13 00:40:12.561399 systemd[1]: Started sshd@11-172.31.22.244:22-20.161.92.111:39648.service - OpenSSH per-connection server daemon (20.161.92.111:39648). Mar 13 00:40:13.067201 sshd[6486]: Accepted publickey for core from 20.161.92.111 port 39648 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:13.068818 sshd-session[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:13.075496 systemd-logind[1959]: New session 12 of user core. Mar 13 00:40:13.083581 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:40:13.612485 sshd[6490]: Connection closed by 20.161.92.111 port 39648 Mar 13 00:40:13.613741 sshd-session[6486]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:13.623798 systemd[1]: sshd@11-172.31.22.244:22-20.161.92.111:39648.service: Deactivated successfully. Mar 13 00:40:13.629852 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:40:13.637293 systemd-logind[1959]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:40:13.639716 systemd-logind[1959]: Removed session 12. Mar 13 00:40:13.704420 systemd[1]: Started sshd@12-172.31.22.244:22-20.161.92.111:39658.service - OpenSSH per-connection server daemon (20.161.92.111:39658). 
Mar 13 00:40:14.236732 sshd[6502]: Accepted publickey for core from 20.161.92.111 port 39658 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:14.239708 sshd-session[6502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:14.257455 systemd-logind[1959]: New session 13 of user core. Mar 13 00:40:14.260976 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:40:14.699201 sshd[6520]: Connection closed by 20.161.92.111 port 39658 Mar 13 00:40:14.703263 sshd-session[6502]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:14.715134 systemd[1]: sshd@12-172.31.22.244:22-20.161.92.111:39658.service: Deactivated successfully. Mar 13 00:40:14.715705 systemd-logind[1959]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:40:14.718274 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:40:14.721007 systemd-logind[1959]: Removed session 13. Mar 13 00:40:19.788795 systemd[1]: Started sshd@13-172.31.22.244:22-20.161.92.111:39668.service - OpenSSH per-connection server daemon (20.161.92.111:39668). Mar 13 00:40:20.335876 sshd[6549]: Accepted publickey for core from 20.161.92.111 port 39668 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:20.340160 sshd-session[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:20.346656 systemd-logind[1959]: New session 14 of user core. Mar 13 00:40:20.354611 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:40:21.204035 sshd[6552]: Connection closed by 20.161.92.111 port 39668 Mar 13 00:40:21.205644 sshd-session[6549]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:21.210194 systemd[1]: sshd@13-172.31.22.244:22-20.161.92.111:39668.service: Deactivated successfully. Mar 13 00:40:21.213316 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 13 00:40:21.214509 systemd-logind[1959]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:40:21.217215 systemd-logind[1959]: Removed session 14. Mar 13 00:40:21.294877 systemd[1]: Started sshd@14-172.31.22.244:22-20.161.92.111:44864.service - OpenSSH per-connection server daemon (20.161.92.111:44864). Mar 13 00:40:21.737423 sshd[6564]: Accepted publickey for core from 20.161.92.111 port 44864 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:21.738862 sshd-session[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:21.744889 systemd-logind[1959]: New session 15 of user core. Mar 13 00:40:21.757611 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:40:22.554668 sshd[6567]: Connection closed by 20.161.92.111 port 44864 Mar 13 00:40:22.559081 sshd-session[6564]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:22.569602 systemd-logind[1959]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:40:22.570795 systemd[1]: sshd@14-172.31.22.244:22-20.161.92.111:44864.service: Deactivated successfully. Mar 13 00:40:22.573149 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:40:22.575756 systemd-logind[1959]: Removed session 15. Mar 13 00:40:22.645655 systemd[1]: Started sshd@15-172.31.22.244:22-20.161.92.111:44870.service - OpenSSH per-connection server daemon (20.161.92.111:44870). Mar 13 00:40:23.098828 sshd[6578]: Accepted publickey for core from 20.161.92.111 port 44870 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:23.100410 sshd-session[6578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:23.107266 systemd-logind[1959]: New session 16 of user core. Mar 13 00:40:23.110564 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 13 00:40:24.216847 sshd[6581]: Connection closed by 20.161.92.111 port 44870 Mar 13 00:40:24.215912 sshd-session[6578]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:24.227876 systemd[1]: sshd@15-172.31.22.244:22-20.161.92.111:44870.service: Deactivated successfully. Mar 13 00:40:24.230646 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:40:24.236012 systemd-logind[1959]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:40:24.237565 systemd-logind[1959]: Removed session 16. Mar 13 00:40:24.305550 systemd[1]: Started sshd@16-172.31.22.244:22-20.161.92.111:44880.service - OpenSSH per-connection server daemon (20.161.92.111:44880). Mar 13 00:40:24.767415 sshd[6621]: Accepted publickey for core from 20.161.92.111 port 44880 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:24.768712 sshd-session[6621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:24.774761 systemd-logind[1959]: New session 17 of user core. Mar 13 00:40:24.779574 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:40:25.678512 sshd[6624]: Connection closed by 20.161.92.111 port 44880 Mar 13 00:40:25.679690 sshd-session[6621]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:25.684760 systemd-logind[1959]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:40:25.685737 systemd[1]: sshd@16-172.31.22.244:22-20.161.92.111:44880.service: Deactivated successfully. Mar 13 00:40:25.688780 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:40:25.691117 systemd-logind[1959]: Removed session 17. Mar 13 00:40:25.767903 systemd[1]: Started sshd@17-172.31.22.244:22-20.161.92.111:44896.service - OpenSSH per-connection server daemon (20.161.92.111:44896). 
Mar 13 00:40:26.221118 sshd[6634]: Accepted publickey for core from 20.161.92.111 port 44896 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:26.222719 sshd-session[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:26.228913 systemd-logind[1959]: New session 18 of user core. Mar 13 00:40:26.233015 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:40:26.588896 sshd[6637]: Connection closed by 20.161.92.111 port 44896 Mar 13 00:40:26.590405 sshd-session[6634]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:26.597303 systemd[1]: sshd@17-172.31.22.244:22-20.161.92.111:44896.service: Deactivated successfully. Mar 13 00:40:26.599512 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:40:26.601587 systemd-logind[1959]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:40:26.603311 systemd-logind[1959]: Removed session 18. Mar 13 00:40:31.678715 systemd[1]: Started sshd@18-172.31.22.244:22-20.161.92.111:58068.service - OpenSSH per-connection server daemon (20.161.92.111:58068). Mar 13 00:40:32.207604 sshd[6653]: Accepted publickey for core from 20.161.92.111 port 58068 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:32.208672 sshd-session[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:32.224429 systemd-logind[1959]: New session 19 of user core. Mar 13 00:40:32.230579 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:40:32.839426 sshd[6656]: Connection closed by 20.161.92.111 port 58068 Mar 13 00:40:32.840170 sshd-session[6653]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:32.847300 systemd[1]: sshd@18-172.31.22.244:22-20.161.92.111:58068.service: Deactivated successfully. Mar 13 00:40:32.847386 systemd-logind[1959]: Session 19 logged out. Waiting for processes to exit. 
Mar 13 00:40:32.851857 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:40:32.857120 systemd-logind[1959]: Removed session 19. Mar 13 00:40:37.929884 systemd[1]: Started sshd@19-172.31.22.244:22-20.161.92.111:58080.service - OpenSSH per-connection server daemon (20.161.92.111:58080). Mar 13 00:40:38.375043 sshd[6722]: Accepted publickey for core from 20.161.92.111 port 58080 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:38.376098 sshd-session[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:38.382063 systemd-logind[1959]: New session 20 of user core. Mar 13 00:40:38.388638 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 13 00:40:38.848927 sshd[6725]: Connection closed by 20.161.92.111 port 58080 Mar 13 00:40:38.849579 sshd-session[6722]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:38.858883 systemd-logind[1959]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:40:38.859744 systemd[1]: sshd@19-172.31.22.244:22-20.161.92.111:58080.service: Deactivated successfully. Mar 13 00:40:38.862579 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:40:38.865432 systemd-logind[1959]: Removed session 20. Mar 13 00:40:43.939728 systemd[1]: Started sshd@20-172.31.22.244:22-20.161.92.111:36014.service - OpenSSH per-connection server daemon (20.161.92.111:36014). Mar 13 00:40:44.435998 sshd[6782]: Accepted publickey for core from 20.161.92.111 port 36014 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:44.437968 sshd-session[6782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:44.443432 systemd-logind[1959]: New session 21 of user core. Mar 13 00:40:44.454631 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 13 00:40:45.329565 sshd[6785]: Connection closed by 20.161.92.111 port 36014 Mar 13 00:40:45.330659 sshd-session[6782]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:45.335098 systemd[1]: sshd@20-172.31.22.244:22-20.161.92.111:36014.service: Deactivated successfully. Mar 13 00:40:45.337877 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:40:45.340264 systemd-logind[1959]: Session 21 logged out. Waiting for processes to exit. Mar 13 00:40:45.342874 systemd-logind[1959]: Removed session 21. Mar 13 00:40:50.429704 systemd[1]: Started sshd@21-172.31.22.244:22-20.161.92.111:51400.service - OpenSSH per-connection server daemon (20.161.92.111:51400). Mar 13 00:40:50.988684 sshd[6796]: Accepted publickey for core from 20.161.92.111 port 51400 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:50.991076 sshd-session[6796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:50.999767 systemd-logind[1959]: New session 22 of user core. Mar 13 00:40:51.011675 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:40:51.891434 sshd[6820]: Connection closed by 20.161.92.111 port 51400 Mar 13 00:40:51.892623 sshd-session[6796]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:51.897239 systemd[1]: sshd@21-172.31.22.244:22-20.161.92.111:51400.service: Deactivated successfully. Mar 13 00:40:51.904278 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:40:51.913081 systemd-logind[1959]: Session 22 logged out. Waiting for processes to exit. Mar 13 00:40:51.916604 systemd-logind[1959]: Removed session 22. Mar 13 00:40:56.982662 systemd[1]: Started sshd@22-172.31.22.244:22-20.161.92.111:51402.service - OpenSSH per-connection server daemon (20.161.92.111:51402). 
Mar 13 00:40:57.426409 sshd[6833]: Accepted publickey for core from 20.161.92.111 port 51402 ssh2: RSA SHA256:VTf7r7rJ3bS+yTBxl6E9nOmHhISovyU9twzX8a6wdBs Mar 13 00:40:57.428329 sshd-session[6833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:57.433448 systemd-logind[1959]: New session 23 of user core. Mar 13 00:40:57.440565 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 13 00:40:57.762108 sshd[6836]: Connection closed by 20.161.92.111 port 51402 Mar 13 00:40:57.763573 sshd-session[6833]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:57.768048 systemd-logind[1959]: Session 23 logged out. Waiting for processes to exit. Mar 13 00:40:57.768829 systemd[1]: sshd@22-172.31.22.244:22-20.161.92.111:51402.service: Deactivated successfully. Mar 13 00:40:57.772069 systemd[1]: session-23.scope: Deactivated successfully. Mar 13 00:40:57.774333 systemd-logind[1959]: Removed session 23. Mar 13 00:41:12.707150 systemd[1]: cri-containerd-a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3.scope: Deactivated successfully. Mar 13 00:41:12.708393 systemd[1]: cri-containerd-a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3.scope: Consumed 9.338s CPU time, 124.8M memory peak, 56.2M read from disk. Mar 13 00:41:12.816434 containerd[1987]: time="2026-03-13T00:41:12.803905878Z" level=info msg="received container exit event container_id:\"a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3\" id:\"a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3\" pid:3842 exit_status:1 exited_at:{seconds:1773362472 nanos:736710069}" Mar 13 00:41:12.913932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3-rootfs.mount: Deactivated successfully. 
Mar 13 00:41:13.117312 kubelet[3328]: I0313 00:41:13.111314 3328 scope.go:117] "RemoveContainer" containerID="a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3" Mar 13 00:41:13.160847 containerd[1987]: time="2026-03-13T00:41:13.160800443Z" level=info msg="CreateContainer within sandbox \"9bee0b059026818c34d24d4e6b1a73f875e58e71b5b1c3a6b007c39f5803e2e3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 13 00:41:13.291936 containerd[1987]: time="2026-03-13T00:41:13.291577066Z" level=info msg="Container b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:13.314691 containerd[1987]: time="2026-03-13T00:41:13.314638105Z" level=info msg="CreateContainer within sandbox \"9bee0b059026818c34d24d4e6b1a73f875e58e71b5b1c3a6b007c39f5803e2e3\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e\"" Mar 13 00:41:13.315539 containerd[1987]: time="2026-03-13T00:41:13.315382036Z" level=info msg="StartContainer for \"b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e\"" Mar 13 00:41:13.323531 containerd[1987]: time="2026-03-13T00:41:13.323486652Z" level=info msg="connecting to shim b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e" address="unix:///run/containerd/s/8ac2029598166c8067b4d582f2c959e4f3209ae10569c94057d4203ccb51cf7e" protocol=ttrpc version=3 Mar 13 00:41:13.385860 systemd[1]: Started cri-containerd-b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e.scope - libcontainer container b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e. Mar 13 00:41:13.399672 systemd[1]: cri-containerd-04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8.scope: Deactivated successfully. 
Mar 13 00:41:13.400074 systemd[1]: cri-containerd-04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8.scope: Consumed 4.408s CPU time, 85.3M memory peak, 102.3M read from disk. Mar 13 00:41:13.404736 containerd[1987]: time="2026-03-13T00:41:13.403814695Z" level=info msg="received container exit event container_id:\"04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8\" id:\"04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8\" pid:3169 exit_status:1 exited_at:{seconds:1773362473 nanos:403111422}" Mar 13 00:41:13.478054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8-rootfs.mount: Deactivated successfully. Mar 13 00:41:13.495550 containerd[1987]: time="2026-03-13T00:41:13.495382912Z" level=info msg="StartContainer for \"b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e\" returns successfully" Mar 13 00:41:14.094157 kubelet[3328]: I0313 00:41:14.094127 3328 scope.go:117] "RemoveContainer" containerID="04824b78804d76f8e1db17d3049bcd183690589d3e8d025d26ac52b2f1c335c8" Mar 13 00:41:14.100376 containerd[1987]: time="2026-03-13T00:41:14.099994331Z" level=info msg="CreateContainer within sandbox \"9043efe745c541ff6cfeb22e48dffc48f676c89c798621034105f726be8dd83a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 13 00:41:14.145922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3462897633.mount: Deactivated successfully. 
Mar 13 00:41:14.161570 containerd[1987]: time="2026-03-13T00:41:14.161489697Z" level=info msg="Container a1381c1d2fc326416759cd1ed9dc028a6d984cf0644209f24b1edba486bbf4fa: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:14.176664 containerd[1987]: time="2026-03-13T00:41:14.176619084Z" level=info msg="CreateContainer within sandbox \"9043efe745c541ff6cfeb22e48dffc48f676c89c798621034105f726be8dd83a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a1381c1d2fc326416759cd1ed9dc028a6d984cf0644209f24b1edba486bbf4fa\"" Mar 13 00:41:14.177398 containerd[1987]: time="2026-03-13T00:41:14.177300154Z" level=info msg="StartContainer for \"a1381c1d2fc326416759cd1ed9dc028a6d984cf0644209f24b1edba486bbf4fa\"" Mar 13 00:41:14.178488 containerd[1987]: time="2026-03-13T00:41:14.178451320Z" level=info msg="connecting to shim a1381c1d2fc326416759cd1ed9dc028a6d984cf0644209f24b1edba486bbf4fa" address="unix:///run/containerd/s/2aaff2c70c720f86a73475baf6a45d3bfc50fc2b12aef4ab18c1fb86fe2140e7" protocol=ttrpc version=3 Mar 13 00:41:14.206599 systemd[1]: Started cri-containerd-a1381c1d2fc326416759cd1ed9dc028a6d984cf0644209f24b1edba486bbf4fa.scope - libcontainer container a1381c1d2fc326416759cd1ed9dc028a6d984cf0644209f24b1edba486bbf4fa. Mar 13 00:41:14.285941 containerd[1987]: time="2026-03-13T00:41:14.285893893Z" level=info msg="StartContainer for \"a1381c1d2fc326416759cd1ed9dc028a6d984cf0644209f24b1edba486bbf4fa\" returns successfully" Mar 13 00:41:17.626151 systemd[1]: cri-containerd-3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378.scope: Deactivated successfully. Mar 13 00:41:17.626715 systemd[1]: cri-containerd-3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378.scope: Consumed 2.514s CPU time, 38.9M memory peak, 55.5M read from disk. 
Mar 13 00:41:17.630676 containerd[1987]: time="2026-03-13T00:41:17.630485140Z" level=info msg="received container exit event container_id:\"3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378\" id:\"3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378\" pid:3160 exit_status:1 exited_at:{seconds:1773362477 nanos:628348985}" Mar 13 00:41:17.665821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378-rootfs.mount: Deactivated successfully. Mar 13 00:41:17.713997 kubelet[3328]: E0313 00:41:17.713930 3328 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-244?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 00:41:18.119545 kubelet[3328]: I0313 00:41:18.119514 3328 scope.go:117] "RemoveContainer" containerID="3655280b920a2a1a9b4e43ada6c8d61a519e5dae4feaf44b1559bfff4ef37378" Mar 13 00:41:18.122113 containerd[1987]: time="2026-03-13T00:41:18.122060720Z" level=info msg="CreateContainer within sandbox \"c83a49e78cd7bb4f8e4ac9fb739745da4b2e2016927a2d46960a7915631abecf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 13 00:41:18.139387 containerd[1987]: time="2026-03-13T00:41:18.137305736Z" level=info msg="Container 11cad52192e230d46fac684d9c6b983a8a03dd44a1adade1766e7277d80ee312: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:18.151673 containerd[1987]: time="2026-03-13T00:41:18.151624705Z" level=info msg="CreateContainer within sandbox \"c83a49e78cd7bb4f8e4ac9fb739745da4b2e2016927a2d46960a7915631abecf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"11cad52192e230d46fac684d9c6b983a8a03dd44a1adade1766e7277d80ee312\"" Mar 13 00:41:18.152237 containerd[1987]: time="2026-03-13T00:41:18.152208850Z" level=info msg="StartContainer for 
\"11cad52192e230d46fac684d9c6b983a8a03dd44a1adade1766e7277d80ee312\"" Mar 13 00:41:18.153323 containerd[1987]: time="2026-03-13T00:41:18.153283550Z" level=info msg="connecting to shim 11cad52192e230d46fac684d9c6b983a8a03dd44a1adade1766e7277d80ee312" address="unix:///run/containerd/s/5712f244d965907ebea17e81815334006b93500cb228845eb8c2e5e327211fee" protocol=ttrpc version=3 Mar 13 00:41:18.181581 systemd[1]: Started cri-containerd-11cad52192e230d46fac684d9c6b983a8a03dd44a1adade1766e7277d80ee312.scope - libcontainer container 11cad52192e230d46fac684d9c6b983a8a03dd44a1adade1766e7277d80ee312. Mar 13 00:41:18.244476 containerd[1987]: time="2026-03-13T00:41:18.244435658Z" level=info msg="StartContainer for \"11cad52192e230d46fac684d9c6b983a8a03dd44a1adade1766e7277d80ee312\" returns successfully" Mar 13 00:41:25.357036 systemd[1]: cri-containerd-b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e.scope: Deactivated successfully. Mar 13 00:41:25.357549 systemd[1]: cri-containerd-b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e.scope: Consumed 409ms CPU time, 85.3M memory peak, 50.2M read from disk. Mar 13 00:41:25.358927 containerd[1987]: time="2026-03-13T00:41:25.358554190Z" level=info msg="received container exit event container_id:\"b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e\" id:\"b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e\" pid:6950 exit_status:1 exited_at:{seconds:1773362485 nanos:357235926}" Mar 13 00:41:25.385871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e-rootfs.mount: Deactivated successfully. 
Mar 13 00:41:26.162627 kubelet[3328]: I0313 00:41:26.162590 3328 scope.go:117] "RemoveContainer" containerID="a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3" Mar 13 00:41:26.163188 kubelet[3328]: I0313 00:41:26.163016 3328 scope.go:117] "RemoveContainer" containerID="b88f8993e84e0a5e8a274c6655d3640b7ffdc3625a4cc2ce3e3c939e3be5498e" Mar 13 00:41:26.165092 containerd[1987]: time="2026-03-13T00:41:26.164946499Z" level=info msg="RemoveContainer for \"a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3\"" Mar 13 00:41:26.173771 kubelet[3328]: E0313 00:41:26.173434 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-fg7k5_tigera-operator(ffee2c6e-b71b-4ac4-a844-4dcd4dc0c960)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-fg7k5" podUID="ffee2c6e-b71b-4ac4-a844-4dcd4dc0c960" Mar 13 00:41:26.191298 containerd[1987]: time="2026-03-13T00:41:26.191255142Z" level=info msg="RemoveContainer for \"a447da88b30044079b56e97bc806a381a964a1f01b5c0b408360ad59728abea3\" returns successfully" Mar 13 00:41:27.718527 kubelet[3328]: E0313 00:41:27.718479 3328 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-22-244)"