Mar 3 12:46:07.148811 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Mar 3 12:46:07.148857 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Mar 3 11:03:33 -00 2026 Mar 3 12:46:07.148882 kernel: KASLR disabled due to lack of seed Mar 3 12:46:07.148899 kernel: efi: EFI v2.7 by EDK II Mar 3 12:46:07.148915 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598 Mar 3 12:46:07.148930 kernel: secureboot: Secure boot disabled Mar 3 12:46:07.148947 kernel: ACPI: Early table checksum verification disabled Mar 3 12:46:07.148963 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Mar 3 12:46:07.148980 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Mar 3 12:46:07.148996 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Mar 3 12:46:07.149012 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Mar 3 12:46:07.149153 kernel: ACPI: FACS 0x0000000078630000 000040 Mar 3 12:46:07.149173 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Mar 3 12:46:07.151772 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Mar 3 12:46:07.151792 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Mar 3 12:46:07.151809 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Mar 3 12:46:07.151836 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Mar 3 12:46:07.151853 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Mar 3 12:46:07.151869 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Mar 
3 12:46:07.151885 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Mar 3 12:46:07.151902 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Mar 3 12:46:07.151918 kernel: printk: legacy bootconsole [uart0] enabled Mar 3 12:46:07.151933 kernel: ACPI: Use ACPI SPCR as default console: Yes Mar 3 12:46:07.151951 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Mar 3 12:46:07.151967 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Mar 3 12:46:07.151983 kernel: Zone ranges: Mar 3 12:46:07.151999 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Mar 3 12:46:07.152048 kernel: DMA32 empty Mar 3 12:46:07.152070 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Mar 3 12:46:07.152086 kernel: Device empty Mar 3 12:46:07.152102 kernel: Movable zone start for each node Mar 3 12:46:07.152118 kernel: Early memory node ranges Mar 3 12:46:07.152136 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Mar 3 12:46:07.152152 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Mar 3 12:46:07.152168 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Mar 3 12:46:07.152186 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Mar 3 12:46:07.152206 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Mar 3 12:46:07.152222 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Mar 3 12:46:07.152238 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Mar 3 12:46:07.152262 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Mar 3 12:46:07.152285 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Mar 3 12:46:07.152302 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Mar 3 12:46:07.152320 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Mar 3 12:46:07.152336 kernel: psci: probing for conduit method from ACPI. Mar 3 12:46:07.152358 kernel: psci: PSCIv1.0 detected in firmware. 
Mar 3 12:46:07.152375 kernel: psci: Using standard PSCI v0.2 function IDs Mar 3 12:46:07.152393 kernel: psci: Trusted OS migration not required Mar 3 12:46:07.152410 kernel: psci: SMC Calling Convention v1.1 Mar 3 12:46:07.152429 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Mar 3 12:46:07.152446 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Mar 3 12:46:07.152463 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Mar 3 12:46:07.152480 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 3 12:46:07.152498 kernel: Detected PIPT I-cache on CPU0 Mar 3 12:46:07.152516 kernel: CPU features: detected: GIC system register CPU interface Mar 3 12:46:07.152533 kernel: CPU features: detected: Spectre-v2 Mar 3 12:46:07.152555 kernel: CPU features: detected: Spectre-v3a Mar 3 12:46:07.152573 kernel: CPU features: detected: Spectre-BHB Mar 3 12:46:07.152590 kernel: CPU features: detected: ARM erratum 1742098 Mar 3 12:46:07.152607 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Mar 3 12:46:07.152623 kernel: alternatives: applying boot alternatives Mar 3 12:46:07.152643 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9550c2083f3062ad7c57f28a015a3afab95dfddb073076612b771af8d5df9e06 Mar 3 12:46:07.152660 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 3 12:46:07.152677 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 3 12:46:07.152693 kernel: Fallback order for Node 0: 0 Mar 3 12:46:07.152710 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1007616 Mar 3 12:46:07.152726 kernel: Policy zone: Normal Mar 3 12:46:07.152747 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 3 12:46:07.152764 kernel: software IO TLB: area num 2. Mar 3 12:46:07.152781 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB) Mar 3 12:46:07.152798 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 3 12:46:07.152814 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 3 12:46:07.152833 kernel: rcu: RCU event tracing is enabled. Mar 3 12:46:07.152850 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 3 12:46:07.152867 kernel: Trampoline variant of Tasks RCU enabled. Mar 3 12:46:07.152884 kernel: Tracing variant of Tasks RCU enabled. Mar 3 12:46:07.152901 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 3 12:46:07.152918 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 3 12:46:07.152939 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 3 12:46:07.152957 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Mar 3 12:46:07.152973 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 3 12:46:07.152990 kernel: GICv3: 96 SPIs implemented Mar 3 12:46:07.153006 kernel: GICv3: 0 Extended SPIs implemented Mar 3 12:46:07.153057 kernel: Root IRQ handler: gic_handle_irq Mar 3 12:46:07.153077 kernel: GICv3: GICv3 features: 16 PPIs Mar 3 12:46:07.153094 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Mar 3 12:46:07.153110 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Mar 3 12:46:07.153128 kernel: ITS [mem 0x10080000-0x1009ffff] Mar 3 12:46:07.153168 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Mar 3 12:46:07.153187 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Mar 3 12:46:07.153212 kernel: GICv3: using LPI property table @0x0000000400110000 Mar 3 12:46:07.153229 kernel: ITS: Using hypervisor restricted LPI range [128] Mar 3 12:46:07.153245 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Mar 3 12:46:07.153262 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 3 12:46:07.153279 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Mar 3 12:46:07.153296 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Mar 3 12:46:07.153313 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Mar 3 12:46:07.153330 kernel: Console: colour dummy device 80x25 Mar 3 12:46:07.153347 kernel: printk: legacy console [tty1] enabled Mar 3 12:46:07.153364 kernel: ACPI: Core revision 20240827 Mar 3 12:46:07.153382 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
166.66 BogoMIPS (lpj=83333) Mar 3 12:46:07.153403 kernel: pid_max: default: 32768 minimum: 301 Mar 3 12:46:07.153420 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Mar 3 12:46:07.153437 kernel: landlock: Up and running. Mar 3 12:46:07.153454 kernel: SELinux: Initializing. Mar 3 12:46:07.153471 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 3 12:46:07.153489 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 3 12:46:07.153505 kernel: rcu: Hierarchical SRCU implementation. Mar 3 12:46:07.153524 kernel: rcu: Max phase no-delay instances is 400. Mar 3 12:46:07.153545 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Mar 3 12:46:07.153563 kernel: Remapping and enabling EFI services. Mar 3 12:46:07.153579 kernel: smp: Bringing up secondary CPUs ... Mar 3 12:46:07.153596 kernel: Detected PIPT I-cache on CPU1 Mar 3 12:46:07.153614 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Mar 3 12:46:07.153631 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Mar 3 12:46:07.153649 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Mar 3 12:46:07.153666 kernel: smp: Brought up 1 node, 2 CPUs Mar 3 12:46:07.153683 kernel: SMP: Total of 2 processors activated. 
Mar 3 12:46:07.153704 kernel: CPU: All CPU(s) started at EL1 Mar 3 12:46:07.153732 kernel: CPU features: detected: 32-bit EL0 Support Mar 3 12:46:07.153750 kernel: CPU features: detected: 32-bit EL1 Support Mar 3 12:46:07.153771 kernel: CPU features: detected: CRC32 instructions Mar 3 12:46:07.153789 kernel: alternatives: applying system-wide alternatives Mar 3 12:46:07.153808 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved) Mar 3 12:46:07.153826 kernel: devtmpfs: initialized Mar 3 12:46:07.153844 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 3 12:46:07.153867 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 3 12:46:07.153885 kernel: 16880 pages in range for non-PLT usage Mar 3 12:46:07.153903 kernel: 508400 pages in range for PLT usage Mar 3 12:46:07.153921 kernel: pinctrl core: initialized pinctrl subsystem Mar 3 12:46:07.153938 kernel: SMBIOS 3.0.0 present. 
Mar 3 12:46:07.153956 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Mar 3 12:46:07.153974 kernel: DMI: Memory slots populated: 0/0 Mar 3 12:46:07.153992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 3 12:46:07.154010 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 3 12:46:07.154077 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 3 12:46:07.154098 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 3 12:46:07.154116 kernel: audit: initializing netlink subsys (disabled) Mar 3 12:46:07.154134 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1 Mar 3 12:46:07.154152 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 3 12:46:07.154170 kernel: cpuidle: using governor menu Mar 3 12:46:07.154188 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 3 12:46:07.154206 kernel: ASID allocator initialised with 65536 entries Mar 3 12:46:07.154224 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 3 12:46:07.154246 kernel: Serial: AMBA PL011 UART driver Mar 3 12:46:07.154264 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 3 12:46:07.154282 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 3 12:46:07.154300 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 3 12:46:07.154317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 3 12:46:07.154335 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 3 12:46:07.154353 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 3 12:46:07.154372 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 3 12:46:07.154390 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 3 12:46:07.154412 kernel: ACPI: Added _OSI(Module Device) Mar 3 12:46:07.154430 kernel: ACPI: Added _OSI(Processor 
Device) Mar 3 12:46:07.154448 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 3 12:46:07.154466 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 3 12:46:07.154484 kernel: ACPI: Interpreter enabled Mar 3 12:46:07.154502 kernel: ACPI: Using GIC for interrupt routing Mar 3 12:46:07.154520 kernel: ACPI: MCFG table detected, 1 entries Mar 3 12:46:07.154537 kernel: ACPI: CPU0 has been hot-added Mar 3 12:46:07.154555 kernel: ACPI: CPU1 has been hot-added Mar 3 12:46:07.154576 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Mar 3 12:46:07.154887 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 3 12:46:07.155125 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 3 12:46:07.155320 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 3 12:46:07.155509 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Mar 3 12:46:07.155695 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Mar 3 12:46:07.155720 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Mar 3 12:46:07.155747 kernel: acpiphp: Slot [1] registered Mar 3 12:46:07.155766 kernel: acpiphp: Slot [2] registered Mar 3 12:46:07.155784 kernel: acpiphp: Slot [3] registered Mar 3 12:46:07.155802 kernel: acpiphp: Slot [4] registered Mar 3 12:46:07.155820 kernel: acpiphp: Slot [5] registered Mar 3 12:46:07.155838 kernel: acpiphp: Slot [6] registered Mar 3 12:46:07.155855 kernel: acpiphp: Slot [7] registered Mar 3 12:46:07.155873 kernel: acpiphp: Slot [8] registered Mar 3 12:46:07.155890 kernel: acpiphp: Slot [9] registered Mar 3 12:46:07.155908 kernel: acpiphp: Slot [10] registered Mar 3 12:46:07.155930 kernel: acpiphp: Slot [11] registered Mar 3 12:46:07.155948 kernel: acpiphp: Slot [12] registered Mar 3 12:46:07.155966 kernel: acpiphp: Slot [13] registered Mar 3 12:46:07.155984 kernel: acpiphp: Slot [14] 
registered Mar 3 12:46:07.156002 kernel: acpiphp: Slot [15] registered Mar 3 12:46:07.156057 kernel: acpiphp: Slot [16] registered Mar 3 12:46:07.156080 kernel: acpiphp: Slot [17] registered Mar 3 12:46:07.156098 kernel: acpiphp: Slot [18] registered Mar 3 12:46:07.156116 kernel: acpiphp: Slot [19] registered Mar 3 12:46:07.156140 kernel: acpiphp: Slot [20] registered Mar 3 12:46:07.156159 kernel: acpiphp: Slot [21] registered Mar 3 12:46:07.156177 kernel: acpiphp: Slot [22] registered Mar 3 12:46:07.156195 kernel: acpiphp: Slot [23] registered Mar 3 12:46:07.156212 kernel: acpiphp: Slot [24] registered Mar 3 12:46:07.156230 kernel: acpiphp: Slot [25] registered Mar 3 12:46:07.156248 kernel: acpiphp: Slot [26] registered Mar 3 12:46:07.156266 kernel: acpiphp: Slot [27] registered Mar 3 12:46:07.156283 kernel: acpiphp: Slot [28] registered Mar 3 12:46:07.156301 kernel: acpiphp: Slot [29] registered Mar 3 12:46:07.156323 kernel: acpiphp: Slot [30] registered Mar 3 12:46:07.156341 kernel: acpiphp: Slot [31] registered Mar 3 12:46:07.156359 kernel: PCI host bridge to bus 0000:00 Mar 3 12:46:07.156570 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Mar 3 12:46:07.156742 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 3 12:46:07.156910 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Mar 3 12:46:07.157224 kernel: pci_bus 0000:00: root bus resource [bus 00] Mar 3 12:46:07.157479 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Mar 3 12:46:07.157699 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Mar 3 12:46:07.157896 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Mar 3 12:46:07.158139 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Mar 3 12:46:07.158338 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Mar 3 12:46:07.158529 kernel: 
pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 3 12:46:07.158739 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Mar 3 12:46:07.158930 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Mar 3 12:46:07.159175 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Mar 3 12:46:07.159371 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Mar 3 12:46:07.159561 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 3 12:46:07.159735 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Mar 3 12:46:07.159903 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 3 12:46:07.160114 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Mar 3 12:46:07.160140 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 3 12:46:07.160159 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 3 12:46:07.160177 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 3 12:46:07.160195 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 3 12:46:07.160213 kernel: iommu: Default domain type: Translated Mar 3 12:46:07.160231 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 3 12:46:07.160249 kernel: efivars: Registered efivars operations Mar 3 12:46:07.160266 kernel: vgaarb: loaded Mar 3 12:46:07.160290 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 3 12:46:07.160308 kernel: VFS: Disk quotas dquot_6.6.0 Mar 3 12:46:07.160326 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 3 12:46:07.160343 kernel: pnp: PnP ACPI init Mar 3 12:46:07.160547 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Mar 3 12:46:07.160574 kernel: pnp: PnP ACPI: found 1 devices Mar 3 12:46:07.160592 kernel: NET: Registered PF_INET protocol family Mar 3 12:46:07.160611 kernel: IP idents hash table entries: 65536 (order: 7, 524288 
bytes, linear) Mar 3 12:46:07.160634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 3 12:46:07.160652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 3 12:46:07.160671 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 3 12:46:07.160688 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 3 12:46:07.160706 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 3 12:46:07.160725 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 3 12:46:07.160743 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 3 12:46:07.160761 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 3 12:46:07.160779 kernel: PCI: CLS 0 bytes, default 64 Mar 3 12:46:07.160800 kernel: kvm [1]: HYP mode not available Mar 3 12:46:07.160818 kernel: Initialise system trusted keyrings Mar 3 12:46:07.160836 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 3 12:46:07.160853 kernel: Key type asymmetric registered Mar 3 12:46:07.160871 kernel: Asymmetric key parser 'x509' registered Mar 3 12:46:07.160889 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 3 12:46:07.160907 kernel: io scheduler mq-deadline registered Mar 3 12:46:07.160924 kernel: io scheduler kyber registered Mar 3 12:46:07.160942 kernel: io scheduler bfq registered Mar 3 12:46:07.161190 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Mar 3 12:46:07.161218 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 3 12:46:07.161237 kernel: ACPI: button: Power Button [PWRB] Mar 3 12:46:07.161255 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Mar 3 12:46:07.161273 kernel: ACPI: button: Sleep Button [SLPB] Mar 3 12:46:07.161291 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 3 
12:46:07.161310 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 3 12:46:07.161513 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Mar 3 12:46:07.161547 kernel: printk: legacy console [ttyS0] disabled Mar 3 12:46:07.161565 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Mar 3 12:46:07.161583 kernel: printk: legacy console [ttyS0] enabled Mar 3 12:46:07.161601 kernel: printk: legacy bootconsole [uart0] disabled Mar 3 12:46:07.161618 kernel: thunder_xcv, ver 1.0 Mar 3 12:46:07.161636 kernel: thunder_bgx, ver 1.0 Mar 3 12:46:07.161654 kernel: nicpf, ver 1.0 Mar 3 12:46:07.161672 kernel: nicvf, ver 1.0 Mar 3 12:46:07.161887 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 3 12:46:07.162112 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-03T12:46:06 UTC (1772541966) Mar 3 12:46:07.162141 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 3 12:46:07.162160 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Mar 3 12:46:07.162177 kernel: NET: Registered PF_INET6 protocol family Mar 3 12:46:07.162196 kernel: watchdog: NMI not fully supported Mar 3 12:46:07.162213 kernel: watchdog: Hard watchdog permanently disabled Mar 3 12:46:07.162231 kernel: Segment Routing with IPv6 Mar 3 12:46:07.162249 kernel: In-situ OAM (IOAM) with IPv6 Mar 3 12:46:07.162267 kernel: NET: Registered PF_PACKET protocol family Mar 3 12:46:07.162292 kernel: Key type dns_resolver registered Mar 3 12:46:07.162310 kernel: registered taskstats version 1 Mar 3 12:46:07.162327 kernel: Loading compiled-in X.509 certificates Mar 3 12:46:07.162345 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 14a741e1e2b172e51b42fe87d143cf4cae2ad92c' Mar 3 12:46:07.162364 kernel: Demotion targets for Node 0: null Mar 3 12:46:07.162382 kernel: Key type .fscrypt registered Mar 3 12:46:07.162400 kernel: Key type fscrypt-provisioning registered Mar 3 12:46:07.162417 kernel: ima: No 
TPM chip found, activating TPM-bypass! Mar 3 12:46:07.162435 kernel: ima: Allocated hash algorithm: sha1 Mar 3 12:46:07.162457 kernel: ima: No architecture policies found Mar 3 12:46:07.162475 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 3 12:46:07.162493 kernel: clk: Disabling unused clocks Mar 3 12:46:07.162511 kernel: PM: genpd: Disabling unused power domains Mar 3 12:46:07.162528 kernel: Warning: unable to open an initial console. Mar 3 12:46:07.162546 kernel: Freeing unused kernel memory: 39552K Mar 3 12:46:07.162564 kernel: Run /init as init process Mar 3 12:46:07.162581 kernel: with arguments: Mar 3 12:46:07.162599 kernel: /init Mar 3 12:46:07.162620 kernel: with environment: Mar 3 12:46:07.162638 kernel: HOME=/ Mar 3 12:46:07.162655 kernel: TERM=linux Mar 3 12:46:07.162675 systemd[1]: Successfully made /usr/ read-only. Mar 3 12:46:07.162699 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 3 12:46:07.162719 systemd[1]: Detected virtualization amazon. Mar 3 12:46:07.162737 systemd[1]: Detected architecture arm64. Mar 3 12:46:07.162760 systemd[1]: Running in initrd. Mar 3 12:46:07.162779 systemd[1]: No hostname configured, using default hostname. Mar 3 12:46:07.162798 systemd[1]: Hostname set to . Mar 3 12:46:07.162817 systemd[1]: Initializing machine ID from VM UUID. Mar 3 12:46:07.162836 systemd[1]: Queued start job for default target initrd.target. Mar 3 12:46:07.162855 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 3 12:46:07.162874 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 3 12:46:07.162894 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 3 12:46:07.162918 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 3 12:46:07.162938 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 3 12:46:07.162958 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 3 12:46:07.162979 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 3 12:46:07.162998 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 3 12:46:07.163053 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 3 12:46:07.163080 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 3 12:46:07.163107 systemd[1]: Reached target paths.target - Path Units. Mar 3 12:46:07.163126 systemd[1]: Reached target slices.target - Slice Units. Mar 3 12:46:07.163146 systemd[1]: Reached target swap.target - Swaps. Mar 3 12:46:07.163165 systemd[1]: Reached target timers.target - Timer Units. Mar 3 12:46:07.163184 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 3 12:46:07.163204 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 3 12:46:07.163223 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 3 12:46:07.163243 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 3 12:46:07.163262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 3 12:46:07.163285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 3 12:46:07.163305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 3 12:46:07.163325 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 12:46:07.163345 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 3 12:46:07.163364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 3 12:46:07.163384 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 3 12:46:07.163404 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 3 12:46:07.163424 systemd[1]: Starting systemd-fsck-usr.service... Mar 3 12:46:07.163447 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 3 12:46:07.163467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 3 12:46:07.163486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 12:46:07.163505 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 3 12:46:07.163526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 3 12:46:07.163550 systemd[1]: Finished systemd-fsck-usr.service. Mar 3 12:46:07.163570 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 3 12:46:07.163591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 3 12:46:07.163610 kernel: Bridge firewalling registered Mar 3 12:46:07.163630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 3 12:46:07.163701 systemd-journald[258]: Collecting audit messages is disabled. Mar 3 12:46:07.163753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 12:46:07.163775 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 3 12:46:07.163796 systemd-journald[258]: Journal started Mar 3 12:46:07.163833 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2b8d681c09fbb9d5c7ed06968a0ad5) is 8M, max 75.3M, 67.3M free. Mar 3 12:46:07.102324 systemd-modules-load[259]: Inserted module 'overlay' Mar 3 12:46:07.133076 systemd-modules-load[259]: Inserted module 'br_netfilter' Mar 3 12:46:07.178409 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 12:46:07.183120 systemd[1]: Started systemd-journald.service - Journal Service. Mar 3 12:46:07.191198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 3 12:46:07.210248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 3 12:46:07.220329 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 3 12:46:07.238558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 12:46:07.250571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 3 12:46:07.260448 systemd-tmpfiles[289]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 3 12:46:07.267815 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 3 12:46:07.276342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 12:46:07.296259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 3 12:46:07.306744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 3 12:46:07.326646 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9550c2083f3062ad7c57f28a015a3afab95dfddb073076612b771af8d5df9e06 Mar 3 12:46:07.405235 systemd-resolved[301]: Positive Trust Anchors: Mar 3 12:46:07.406994 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 12:46:07.407078 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 12:46:07.517060 kernel: SCSI subsystem initialized Mar 3 12:46:07.525057 kernel: Loading iSCSI transport class v2.0-870. Mar 3 12:46:07.538253 kernel: iscsi: registered transport (tcp) Mar 3 12:46:07.560060 kernel: iscsi: registered transport (qla4xxx) Mar 3 12:46:07.560144 kernel: QLogic iSCSI HBA Driver Mar 3 12:46:07.594634 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 3 12:46:07.628211 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Mar 3 12:46:07.643544 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 12:46:07.689307 kernel: random: crng init done
Mar 3 12:46:07.689545 systemd-resolved[301]: Defaulting to hostname 'linux'.
Mar 3 12:46:07.699129 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 12:46:07.713483 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 12:46:07.741555 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 3 12:46:07.749451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 3 12:46:07.835070 kernel: raid6: neonx8 gen() 6520 MB/s
Mar 3 12:46:07.852068 kernel: raid6: neonx4 gen() 6584 MB/s
Mar 3 12:46:07.869066 kernel: raid6: neonx2 gen() 5456 MB/s
Mar 3 12:46:07.886067 kernel: raid6: neonx1 gen() 3955 MB/s
Mar 3 12:46:07.903063 kernel: raid6: int64x8 gen() 3669 MB/s
Mar 3 12:46:07.920069 kernel: raid6: int64x4 gen() 3711 MB/s
Mar 3 12:46:07.937064 kernel: raid6: int64x2 gen() 3604 MB/s
Mar 3 12:46:07.955143 kernel: raid6: int64x1 gen() 2762 MB/s
Mar 3 12:46:07.955205 kernel: raid6: using algorithm neonx4 gen() 6584 MB/s
Mar 3 12:46:07.974135 kernel: raid6: .... xor() 4875 MB/s, rmw enabled
Mar 3 12:46:07.974210 kernel: raid6: using neon recovery algorithm
Mar 3 12:46:07.982952 kernel: xor: measuring software checksum speed
Mar 3 12:46:07.983039 kernel: 8regs : 12947 MB/sec
Mar 3 12:46:07.985510 kernel: 32regs : 12310 MB/sec
Mar 3 12:46:07.985549 kernel: arm64_neon : 9083 MB/sec
Mar 3 12:46:07.985586 kernel: xor: using function: 8regs (12947 MB/sec)
Mar 3 12:46:08.078069 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 3 12:46:08.092092 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 12:46:08.103233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:08.170685 systemd-udevd[510]: Using default interface naming scheme 'v255'.
Mar 3 12:46:08.182849 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:08.188070 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 3 12:46:08.230584 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation
Mar 3 12:46:08.279155 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 12:46:08.286359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 12:46:08.417965 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:08.434558 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 3 12:46:08.605112 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 3 12:46:08.605212 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 3 12:46:08.625060 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 3 12:46:08.625484 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 3 12:46:08.632180 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 3 12:46:08.632266 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 3 12:46:08.633439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 12:46:08.633601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:08.653795 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:6d:c5:dc:4a:23
Mar 3 12:46:08.653788 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:08.664686 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 3 12:46:08.664400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:08.672251 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:08.678610 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 3 12:46:08.678686 kernel: GPT:9289727 != 33554431
Mar 3 12:46:08.680227 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 3 12:46:08.681312 kernel: GPT:9289727 != 33554431
Mar 3 12:46:08.682768 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 3 12:46:08.683792 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:08.692516 (udev-worker)[556]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:46:08.721415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:08.739057 kernel: nvme nvme0: using unchecked data buffer
Mar 3 12:46:08.876428 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 3 12:46:08.946716 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 3 12:46:08.976476 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 3 12:46:08.983225 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 3 12:46:09.013373 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 3 12:46:09.040769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 3 12:46:09.047534 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 12:46:09.050713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:09.054827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 12:46:09.066980 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 3 12:46:09.074055 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 3 12:46:09.099301 disk-uuid[692]: Primary Header is updated.
Mar 3 12:46:09.099301 disk-uuid[692]: Secondary Entries is updated.
Mar 3 12:46:09.099301 disk-uuid[692]: Secondary Header is updated.
Mar 3 12:46:09.114140 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:09.124177 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 12:46:10.142507 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 3 12:46:10.144057 disk-uuid[694]: The operation has completed successfully.
Mar 3 12:46:10.342154 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 3 12:46:10.344099 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 3 12:46:10.444444 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 3 12:46:10.483408 sh[960]: Success
Mar 3 12:46:10.512453 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 3 12:46:10.512536 kernel: device-mapper: uevent: version 1.0.3
Mar 3 12:46:10.514766 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 3 12:46:10.530070 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 3 12:46:10.644220 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 3 12:46:10.655601 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 3 12:46:10.674384 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 3 12:46:10.707100 kernel: BTRFS: device fsid 639fb782-fb4f-4fdd-a572-72667a093996 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (983)
Mar 3 12:46:10.712711 kernel: BTRFS info (device dm-0): first mount of filesystem 639fb782-fb4f-4fdd-a572-72667a093996
Mar 3 12:46:10.712919 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:10.809072 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 3 12:46:10.809176 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 3 12:46:10.812077 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 3 12:46:10.832712 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 3 12:46:10.839838 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 12:46:10.845271 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 3 12:46:10.847468 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 3 12:46:10.858013 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 3 12:46:10.919075 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1016)
Mar 3 12:46:10.924313 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:10.924415 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:10.943747 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:10.943825 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:10.955130 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:10.957904 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 3 12:46:10.966719 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 3 12:46:11.069079 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 12:46:11.082623 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 12:46:11.163400 systemd-networkd[1152]: lo: Link UP
Mar 3 12:46:11.163934 systemd-networkd[1152]: lo: Gained carrier
Mar 3 12:46:11.168396 systemd-networkd[1152]: Enumeration completed
Mar 3 12:46:11.169480 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 12:46:11.170043 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:11.170055 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 12:46:11.177449 systemd[1]: Reached target network.target - Network.
Mar 3 12:46:11.185665 systemd-networkd[1152]: eth0: Link UP
Mar 3 12:46:11.185673 systemd-networkd[1152]: eth0: Gained carrier
Mar 3 12:46:11.185699 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:11.229195 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.25.173/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 3 12:46:11.599937 ignition[1084]: Ignition 2.22.0
Mar 3 12:46:11.599970 ignition[1084]: Stage: fetch-offline
Mar 3 12:46:11.603619 ignition[1084]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:11.603659 ignition[1084]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:11.609354 ignition[1084]: Ignition finished successfully
Mar 3 12:46:11.612614 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 12:46:11.621595 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 3 12:46:11.669013 ignition[1163]: Ignition 2.22.0
Mar 3 12:46:11.669112 ignition[1163]: Stage: fetch
Mar 3 12:46:11.669656 ignition[1163]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:11.669680 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:11.669807 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:11.681781 ignition[1163]: PUT result: OK
Mar 3 12:46:11.685439 ignition[1163]: parsed url from cmdline: ""
Mar 3 12:46:11.685462 ignition[1163]: no config URL provided
Mar 3 12:46:11.685478 ignition[1163]: reading system config file "/usr/lib/ignition/user.ign"
Mar 3 12:46:11.685504 ignition[1163]: no config at "/usr/lib/ignition/user.ign"
Mar 3 12:46:11.685537 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:11.694123 ignition[1163]: PUT result: OK
Mar 3 12:46:11.698015 ignition[1163]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 3 12:46:11.701284 ignition[1163]: GET result: OK
Mar 3 12:46:11.701473 ignition[1163]: parsing config with SHA512: 6ce5349544d39c030ed93c38bd0c404143145a3013b6436c72e68acb2d719c679f00f9c53c7bb0664b2acb6ee07a801133f5e31508779495640e6a982e49145f
Mar 3 12:46:11.714283 unknown[1163]: fetched base config from "system"
Mar 3 12:46:11.714970 ignition[1163]: fetch: fetch complete
Mar 3 12:46:11.714306 unknown[1163]: fetched base config from "system"
Mar 3 12:46:11.714983 ignition[1163]: fetch: fetch passed
Mar 3 12:46:11.714319 unknown[1163]: fetched user config from "aws"
Mar 3 12:46:11.715102 ignition[1163]: Ignition finished successfully
Mar 3 12:46:11.726405 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 3 12:46:11.731626 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 3 12:46:11.802387 ignition[1169]: Ignition 2.22.0
Mar 3 12:46:11.802417 ignition[1169]: Stage: kargs
Mar 3 12:46:11.803601 ignition[1169]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:11.803869 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:11.804370 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:11.808064 ignition[1169]: PUT result: OK
Mar 3 12:46:11.817318 ignition[1169]: kargs: kargs passed
Mar 3 12:46:11.817423 ignition[1169]: Ignition finished successfully
Mar 3 12:46:11.822837 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 3 12:46:11.828248 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 3 12:46:11.895983 ignition[1175]: Ignition 2.22.0
Mar 3 12:46:11.896516 ignition[1175]: Stage: disks
Mar 3 12:46:11.897444 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:11.897466 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:11.897596 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:11.906719 ignition[1175]: PUT result: OK
Mar 3 12:46:11.911351 ignition[1175]: disks: disks passed
Mar 3 12:46:11.911504 ignition[1175]: Ignition finished successfully
Mar 3 12:46:11.915418 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 3 12:46:11.921886 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 3 12:46:11.927304 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 3 12:46:11.930602 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 12:46:11.935477 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 12:46:11.938288 systemd[1]: Reached target basic.target - Basic System.
Mar 3 12:46:11.948565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 3 12:46:12.046354 systemd-fsck[1184]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 3 12:46:12.055008 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 3 12:46:12.062634 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 3 12:46:12.204056 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f44cfd4f-a1a9-472a-86a7-c3154f299e07 r/w with ordered data mode. Quota mode: none.
Mar 3 12:46:12.206179 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 3 12:46:12.215981 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 3 12:46:12.223622 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 12:46:12.231004 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 3 12:46:12.239471 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 3 12:46:12.239599 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 3 12:46:12.239664 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 12:46:12.268207 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 3 12:46:12.276516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 3 12:46:12.289066 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1203)
Mar 3 12:46:12.293729 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:12.293804 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:12.303362 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:12.303444 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:12.306926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 12:46:12.557114 initrd-setup-root[1227]: cut: /sysroot/etc/passwd: No such file or directory
Mar 3 12:46:12.579309 initrd-setup-root[1234]: cut: /sysroot/etc/group: No such file or directory
Mar 3 12:46:12.591930 initrd-setup-root[1241]: cut: /sysroot/etc/shadow: No such file or directory
Mar 3 12:46:12.600720 initrd-setup-root[1248]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 3 12:46:12.965460 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 3 12:46:12.970874 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 3 12:46:12.982566 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 3 12:46:13.016285 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 3 12:46:13.020886 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:13.053890 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 3 12:46:13.073035 ignition[1315]: INFO : Ignition 2.22.0
Mar 3 12:46:13.073035 ignition[1315]: INFO : Stage: mount
Mar 3 12:46:13.077516 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:13.077516 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:13.077516 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:13.077516 ignition[1315]: INFO : PUT result: OK
Mar 3 12:46:13.088976 systemd-networkd[1152]: eth0: Gained IPv6LL
Mar 3 12:46:13.096684 ignition[1315]: INFO : mount: mount passed
Mar 3 12:46:13.100197 ignition[1315]: INFO : Ignition finished successfully
Mar 3 12:46:13.105102 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 3 12:46:13.112189 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 3 12:46:13.209282 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 12:46:13.259082 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327)
Mar 3 12:46:13.263421 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5bcc6201-9983-4e1f-9352-8a67e2a2e71d
Mar 3 12:46:13.263517 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 3 12:46:13.272616 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 3 12:46:13.272711 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 3 12:46:13.276480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 12:46:13.332921 ignition[1344]: INFO : Ignition 2.22.0
Mar 3 12:46:13.332921 ignition[1344]: INFO : Stage: files
Mar 3 12:46:13.337878 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:13.337878 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:13.337878 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:13.337878 ignition[1344]: INFO : PUT result: OK
Mar 3 12:46:13.348428 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping
Mar 3 12:46:13.351271 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 3 12:46:13.351271 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 3 12:46:13.369483 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 3 12:46:13.372945 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 3 12:46:13.377146 unknown[1344]: wrote ssh authorized keys file for user: core
Mar 3 12:46:13.379940 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 3 12:46:13.392997 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 3 12:46:13.392997 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 3 12:46:13.475546 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 3 12:46:13.820057 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 3 12:46:13.826183 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 3 12:46:13.826183 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 3 12:46:13.826183 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 12:46:13.826183 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 12:46:13.826183 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 12:46:13.826183 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 12:46:13.826183 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 12:46:13.826183 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 12:46:13.862249 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 12:46:13.862249 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 12:46:13.862249 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 3 12:46:13.862249 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 3 12:46:13.862249 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 3 12:46:13.862249 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 3 12:46:14.298039 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 3 12:46:14.675743 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 3 12:46:14.680955 ignition[1344]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 3 12:46:14.680955 ignition[1344]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 12:46:14.689968 ignition[1344]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 12:46:14.689968 ignition[1344]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 3 12:46:14.689968 ignition[1344]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 3 12:46:14.689968 ignition[1344]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 12:46:14.689968 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 12:46:14.689968 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 12:46:14.689968 ignition[1344]: INFO : files: files passed
Mar 3 12:46:14.689968 ignition[1344]: INFO : Ignition finished successfully
Mar 3 12:46:14.716957 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 3 12:46:14.723492 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 3 12:46:14.733845 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 3 12:46:14.750154 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 3 12:46:14.753275 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 3 12:46:14.771709 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:14.771709 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:14.780095 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 12:46:14.786516 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 12:46:14.793375 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 12:46:14.800399 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 12:46:14.877234 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 12:46:14.879750 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 12:46:14.888792 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 12:46:14.891396 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 12:46:14.896126 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 12:46:14.897534 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 12:46:14.940539 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 12:46:14.948442 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 12:46:14.996621 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 12:46:15.002989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:15.006051 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 12:46:15.010914 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 12:46:15.011269 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 12:46:15.016610 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 12:46:15.019885 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 12:46:15.029108 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 12:46:15.038185 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 12:46:15.041414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 12:46:15.049134 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 12:46:15.052303 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 12:46:15.059474 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 12:46:15.065459 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 12:46:15.070834 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 12:46:15.076121 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 12:46:15.079424 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 12:46:15.079707 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 12:46:15.089121 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 12:46:15.092811 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 12:46:15.100535 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 12:46:15.104166 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 12:46:15.107898 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 12:46:15.108201 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 12:46:15.119065 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 12:46:15.119560 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 12:46:15.128654 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 12:46:15.129154 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 12:46:15.137644 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 12:46:15.140642 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 12:46:15.140967 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 12:46:15.170107 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 12:46:15.183554 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 12:46:15.187694 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:15.191816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 12:46:15.192629 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 12:46:15.219281 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 12:46:15.219724 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 12:46:15.234588 ignition[1397]: INFO : Ignition 2.22.0
Mar 3 12:46:15.237979 ignition[1397]: INFO : Stage: umount
Mar 3 12:46:15.237979 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 12:46:15.237979 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 3 12:46:15.237979 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 3 12:46:15.249328 ignition[1397]: INFO : PUT result: OK
Mar 3 12:46:15.253623 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 12:46:15.262709 ignition[1397]: INFO : umount: umount passed
Mar 3 12:46:15.262709 ignition[1397]: INFO : Ignition finished successfully
Mar 3 12:46:15.271909 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 12:46:15.274293 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 12:46:15.280354 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 12:46:15.280487 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 12:46:15.287407 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 12:46:15.287525 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 12:46:15.290980 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 3 12:46:15.291105 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 3 12:46:15.295071 systemd[1]: Stopped target network.target - Network.
Mar 3 12:46:15.297680 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 12:46:15.297788 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 12:46:15.305524 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 12:46:15.307859 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 12:46:15.315399 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 12:46:15.318986 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 12:46:15.323459 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 12:46:15.331419 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 12:46:15.332662 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 12:46:15.333925 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 12:46:15.333987 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 12:46:15.341615 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 12:46:15.341729 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 12:46:15.346247 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 12:46:15.346346 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 12:46:15.350676 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 12:46:15.354438 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 12:46:15.357478 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 12:46:15.357675 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 12:46:15.384510 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 12:46:15.384705 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 12:46:15.395585 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 12:46:15.396060 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 12:46:15.396301 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 12:46:15.404514 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 12:46:15.407079 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 12:46:15.414373 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 12:46:15.414477 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 12:46:15.427478 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 12:46:15.427591 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 12:46:15.437873 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 12:46:15.451819 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 12:46:15.451977 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 12:46:15.463233 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 12:46:15.463353 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:46:15.470999 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 12:46:15.471122 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 12:46:15.475223 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 12:46:15.475321 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 12:46:15.487235 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:15.495640 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 12:46:15.495801 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:15.518773 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 12:46:15.519163 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:15.522784 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 12:46:15.522865 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 12:46:15.525950 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 12:46:15.526043 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 12:46:15.526138 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 12:46:15.526223 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 12:46:15.526950 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 12:46:15.527055 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 12:46:15.527672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 12:46:15.527751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 12:46:15.534208 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 12:46:15.534634 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 12:46:15.534731 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 12:46:15.542532 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 12:46:15.542636 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 12:46:15.549746 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 12:46:15.549846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:15.564905 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 3 12:46:15.565014 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 3 12:46:15.569264 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 12:46:15.569961 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 12:46:15.570166 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 12:46:15.579487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 12:46:15.581093 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 12:46:15.585127 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 12:46:15.599630 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 12:46:15.662273 systemd[1]: Switching root.
Mar 3 12:46:15.715834 systemd-journald[258]: Journal stopped
Mar 3 12:46:18.226410 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Mar 3 12:46:18.226550 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 12:46:18.226592 kernel: SELinux: policy capability open_perms=1
Mar 3 12:46:18.226623 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 12:46:18.226652 kernel: SELinux: policy capability always_check_network=0
Mar 3 12:46:18.226680 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 12:46:18.231212 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 12:46:18.231246 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 12:46:18.231275 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 12:46:18.231307 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 12:46:18.231340 kernel: audit: type=1403 audit(1772541976.244:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 12:46:18.231397 systemd[1]: Successfully loaded SELinux policy in 131.398ms.
Mar 3 12:46:18.231451 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.472ms.
Mar 3 12:46:18.231484 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 12:46:18.231516 systemd[1]: Detected virtualization amazon.
Mar 3 12:46:18.231549 systemd[1]: Detected architecture arm64.
Mar 3 12:46:18.231579 systemd[1]: Detected first boot.
Mar 3 12:46:18.231610 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 12:46:18.231640 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 12:46:18.231670 zram_generator::config[1442]: No configuration found.
Mar 3 12:46:18.231710 systemd[1]: Populated /etc with preset unit settings.
Mar 3 12:46:18.231741 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 12:46:18.231772 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 12:46:18.231802 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 12:46:18.231834 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 12:46:18.231863 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 12:46:18.231896 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 12:46:18.231928 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 12:46:18.231961 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 12:46:18.231991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 12:46:18.233191 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 12:46:18.233240 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 12:46:18.233271 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 12:46:18.233299 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 12:46:18.233329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 12:46:18.233359 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 12:46:18.233388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 12:46:18.233426 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 12:46:18.233456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 12:46:18.233487 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 12:46:18.233518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 12:46:18.233549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 12:46:18.233577 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 12:46:18.233609 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 12:46:18.233644 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 12:46:18.233676 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 12:46:18.233705 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 12:46:18.233739 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 12:46:18.233768 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 12:46:18.233806 systemd[1]: Reached target swap.target - Swaps.
Mar 3 12:46:18.233837 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 12:46:18.233868 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 12:46:18.233913 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 12:46:18.233945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 12:46:18.233982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 12:46:18.234010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 12:46:18.235130 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 12:46:18.235173 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 12:46:18.235202 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 12:46:18.235232 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 12:46:18.235262 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 12:46:18.235290 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 12:46:18.235320 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 12:46:18.235359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 12:46:18.235387 systemd[1]: Reached target machines.target - Containers.
Mar 3 12:46:18.235418 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 12:46:18.235448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:18.235478 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 12:46:18.235509 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 12:46:18.235545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 12:46:18.235576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 12:46:18.235613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 12:46:18.235644 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 12:46:18.235677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 12:46:18.235710 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 12:46:18.235739 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 12:46:18.235770 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 12:46:18.235802 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 12:46:18.235832 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 12:46:18.235869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:18.235901 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 12:46:18.235932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 12:46:18.235965 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 12:46:18.235999 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 12:46:18.236252 kernel: loop: module loaded
Mar 3 12:46:18.236305 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 12:46:18.236339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 12:46:18.236375 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 12:46:18.236409 kernel: fuse: init (API version 7.41)
Mar 3 12:46:18.236438 systemd[1]: Stopped verity-setup.service.
Mar 3 12:46:18.236467 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 12:46:18.236500 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 12:46:18.236540 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 12:46:18.236574 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 12:46:18.236611 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 12:46:18.236643 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 12:46:18.236737 systemd-journald[1522]: Collecting audit messages is disabled.
Mar 3 12:46:18.236806 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 12:46:18.236837 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 12:46:18.236867 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 12:46:18.236901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 12:46:18.236932 systemd-journald[1522]: Journal started
Mar 3 12:46:18.236981 systemd-journald[1522]: Runtime Journal (/run/log/journal/ec2b8d681c09fbb9d5c7ed06968a0ad5) is 8M, max 75.3M, 67.3M free.
Mar 3 12:46:17.632939 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 12:46:17.656576 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 3 12:46:17.657557 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 12:46:18.258093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 12:46:18.258177 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 12:46:18.260835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 12:46:18.262129 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 12:46:18.268417 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 3 12:46:18.269107 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 3 12:46:18.274886 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 12:46:18.275600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 12:46:18.279801 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 12:46:18.284695 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 12:46:18.291475 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 3 12:46:18.315292 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 3 12:46:18.343292 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 12:46:18.350086 kernel: ACPI: bus type drm_connector registered
Mar 3 12:46:18.350566 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 3 12:46:18.359237 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 3 12:46:18.363188 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 3 12:46:18.363247 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 12:46:18.369938 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 3 12:46:18.376362 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 3 12:46:18.376646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:18.383341 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 3 12:46:18.389485 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 3 12:46:18.389684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 12:46:18.399495 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 3 12:46:18.404911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 12:46:18.412589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 12:46:18.427314 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 3 12:46:18.439847 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 12:46:18.443838 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 12:46:18.445088 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 12:46:18.452421 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 3 12:46:18.458519 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 3 12:46:18.474658 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 3 12:46:18.482119 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 3 12:46:18.488214 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 3 12:46:18.504881 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 3 12:46:18.538219 systemd-journald[1522]: Time spent on flushing to /var/log/journal/ec2b8d681c09fbb9d5c7ed06968a0ad5 is 156.309ms for 927 entries.
Mar 3 12:46:18.538219 systemd-journald[1522]: System Journal (/var/log/journal/ec2b8d681c09fbb9d5c7ed06968a0ad5) is 8M, max 195.6M, 187.6M free.
Mar 3 12:46:18.719058 systemd-journald[1522]: Received client request to flush runtime journal.
Mar 3 12:46:18.719156 kernel: loop0: detected capacity change from 0 to 119840
Mar 3 12:46:18.719206 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 3 12:46:18.600338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 12:46:18.631136 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 3 12:46:18.660313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 12:46:18.668140 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 3 12:46:18.711145 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 3 12:46:18.720851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 12:46:18.736132 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 3 12:46:18.745110 kernel: loop1: detected capacity change from 0 to 61264
Mar 3 12:46:18.816980 systemd-tmpfiles[1593]: ACLs are not supported, ignoring.
Mar 3 12:46:18.817012 systemd-tmpfiles[1593]: ACLs are not supported, ignoring.
Mar 3 12:46:18.831100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 12:46:18.859952 kernel: loop2: detected capacity change from 0 to 100632
Mar 3 12:46:18.967098 kernel: loop3: detected capacity change from 0 to 197488
Mar 3 12:46:19.292070 kernel: loop4: detected capacity change from 0 to 119840
Mar 3 12:46:19.318061 kernel: loop5: detected capacity change from 0 to 61264
Mar 3 12:46:19.339070 kernel: loop6: detected capacity change from 0 to 100632
Mar 3 12:46:19.358082 kernel: loop7: detected capacity change from 0 to 197488
Mar 3 12:46:19.391178 (sd-merge)[1602]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 3 12:46:19.392237 (sd-merge)[1602]: Merged extensions into '/usr'.
Mar 3 12:46:19.403967 systemd[1]: Reload requested from client PID 1574 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 3 12:46:19.404004 systemd[1]: Reloading...
Mar 3 12:46:19.567078 zram_generator::config[1628]: No configuration found.
Mar 3 12:46:19.988769 systemd[1]: Reloading finished in 583 ms.
Mar 3 12:46:20.014522 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 3 12:46:20.019096 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 3 12:46:20.043323 systemd[1]: Starting ensure-sysext.service...
Mar 3 12:46:20.047853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 12:46:20.054709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 12:46:20.089280 systemd[1]: Reload requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)...
Mar 3 12:46:20.089315 systemd[1]: Reloading...
Mar 3 12:46:20.158605 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 3 12:46:20.161253 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 3 12:46:20.162439 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 3 12:46:20.162976 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 3 12:46:20.171472 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 3 12:46:20.172852 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Mar 3 12:46:20.173296 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Mar 3 12:46:20.186609 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 12:46:20.186634 systemd-tmpfiles[1681]: Skipping /boot
Mar 3 12:46:20.209931 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 12:46:20.210195 systemd-tmpfiles[1681]: Skipping /boot
Mar 3 12:46:20.248973 ldconfig[1569]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 3 12:46:20.271083 zram_generator::config[1711]: No configuration found.
Mar 3 12:46:20.278464 systemd-udevd[1682]: Using default interface naming scheme 'v255'.
Mar 3 12:46:20.630265 (udev-worker)[1734]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:46:20.935672 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 3 12:46:20.937396 systemd[1]: Reloading finished in 847 ms.
Mar 3 12:46:20.961267 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 12:46:20.965662 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 3 12:46:20.989208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 12:46:21.042530 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 12:46:21.049449 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 3 12:46:21.057492 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 3 12:46:21.066480 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 12:46:21.084325 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 12:46:21.090447 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 3 12:46:21.103107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:21.110425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 12:46:21.116177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 12:46:21.123493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 12:46:21.126218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:21.126466 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:21.140128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:21.141420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:21.141647 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:21.154108 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 12:46:21.163828 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 12:46:21.166601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 12:46:21.166877 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 12:46:21.167260 systemd[1]: Reached target time-set.target - System Time Set.
Mar 3 12:46:21.187649 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 3 12:46:21.203790 systemd[1]: Finished ensure-sysext.service.
Mar 3 12:46:21.263888 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 12:46:21.266354 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 12:46:21.295246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 3 12:46:21.303146 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 3 12:46:21.311919 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 3 12:46:21.374441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 12:46:21.390845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 12:46:21.395531 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 3 12:46:21.400552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 12:46:21.400958 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 12:46:21.405865 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 12:46:21.406287 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 12:46:21.418446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 12:46:21.419154 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 12:46:21.419207 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 3 12:46:21.451139 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 3 12:46:21.493922 augenrules[1933]: No rules
Mar 3 12:46:21.498671 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 12:46:21.501139 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 12:46:21.633483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 12:46:21.712005 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 3 12:46:21.721451 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 3 12:46:21.769010 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 3 12:46:21.795604 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 3 12:46:21.915910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 12:46:21.950247 systemd-networkd[1853]: lo: Link UP
Mar 3 12:46:21.950273 systemd-networkd[1853]: lo: Gained carrier
Mar 3 12:46:21.953327 systemd-networkd[1853]: Enumeration completed
Mar 3 12:46:21.953520 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 12:46:21.959491 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 3 12:46:21.964719 systemd-networkd[1853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:21.964728 systemd-networkd[1853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 12:46:21.966316 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 3 12:46:21.971318 systemd-networkd[1853]: eth0: Link UP
Mar 3 12:46:21.971704 systemd-networkd[1853]: eth0: Gained carrier
Mar 3 12:46:21.971764 systemd-networkd[1853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 12:46:21.982885 systemd-resolved[1854]: Positive Trust Anchors:
Mar 3 12:46:21.983585 systemd-resolved[1854]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 12:46:21.983665 systemd-resolved[1854]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 12:46:21.988206 systemd-networkd[1853]: eth0: DHCPv4 address 172.31.25.173/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 3 12:46:22.005660 systemd-resolved[1854]: Defaulting to hostname 'linux'.
Mar 3 12:46:22.009969 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 12:46:22.012991 systemd[1]: Reached target network.target - Network.
Mar 3 12:46:22.017352 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 12:46:22.020459 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 12:46:22.023464 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 3 12:46:22.026648 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 3 12:46:22.030482 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 3 12:46:22.033604 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 3 12:46:22.036745 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 3 12:46:22.039854 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 3 12:46:22.039931 systemd[1]: Reached target paths.target - Path Units.
Mar 3 12:46:22.042359 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 12:46:22.046613 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 3 12:46:22.052339 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 3 12:46:22.059597 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 3 12:46:22.063145 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 3 12:46:22.066266 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 3 12:46:22.073417 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 3 12:46:22.076926 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 3 12:46:22.083219 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 3 12:46:22.086831 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 3 12:46:22.090765 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 12:46:22.093801 systemd[1]: Reached target basic.target - Basic System.
Mar 3 12:46:22.096428 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 3 12:46:22.096506 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 3 12:46:22.099192 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 3 12:46:22.105072 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 3 12:46:22.112566 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 3 12:46:22.117830 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 3 12:46:22.125423 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 3 12:46:22.136918 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 3 12:46:22.140015 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 3 12:46:22.152489 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 3 12:46:22.162565 systemd[1]: Started ntpd.service - Network Time Service.
Mar 3 12:46:22.173196 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 3 12:46:22.180602 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 3 12:46:22.202436 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 3 12:46:22.211620 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 3 12:46:22.226613 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 3 12:46:22.232638 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 3 12:46:22.233681 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 3 12:46:22.245419 systemd[1]: Starting update-engine.service - Update Engine...
Mar 3 12:46:22.259347 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 3 12:46:22.271465 jq[1970]: false
Mar 3 12:46:22.278319 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 3 12:46:22.282139 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 3 12:46:22.284172 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 3 12:46:22.308313 jq[1982]: true
Mar 3 12:46:22.360878 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 3 12:46:22.376527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 3 12:46:22.408893 extend-filesystems[1971]: Found /dev/nvme0n1p6
Mar 3 12:46:22.440065 jq[1991]: true
Mar 3 12:46:22.440515 extend-filesystems[1971]: Found /dev/nvme0n1p9
Mar 3 12:46:22.451547 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting
Mar 3 12:46:22.454001 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting
Mar 3 12:46:22.454001 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 12:46:22.454001 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: ----------------------------------------------------
Mar 3 12:46:22.454001 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: ntp-4 is maintained by Network Time Foundation,
Mar 3 12:46:22.454001 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 12:46:22.454001 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: corporation. Support and training for ntp-4 are
Mar 3 12:46:22.454001 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: available at https://www.nwtime.org/support
Mar 3 12:46:22.454001 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: ----------------------------------------------------
Mar 3 12:46:22.451682 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 3 12:46:22.451704 ntpd[1973]: ----------------------------------------------------
Mar 3 12:46:22.451721 ntpd[1973]: ntp-4 is maintained by Network Time Foundation,
Mar 3 12:46:22.451738 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 3 12:46:22.451755 ntpd[1973]: corporation. Support and training for ntp-4 are
Mar 3 12:46:22.451773 ntpd[1973]: available at https://www.nwtime.org/support
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: proto: precision = 0.096 usec (-23)
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: basedate set to 2026-02-19
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: gps base set to 2026-02-22 (week 2407)
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: Listen normally on 3 eth0 172.31.25.173:123
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: Listen normally on 4 lo [::1]:123
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: bind(21) AF_INET6 [fe80::46d:c5ff:fedc:4a23%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 12:46:22.471371 ntpd[1973]: 3 Mar 12:46:22 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::46d:c5ff:fedc:4a23%2]:123
Mar 3 12:46:22.451795 ntpd[1973]: ----------------------------------------------------
Mar 3 12:46:22.460625 ntpd[1973]: proto: precision = 0.096 usec (-23)
Mar 3 12:46:22.463633 ntpd[1973]: basedate set to 2026-02-19
Mar 3 12:46:22.463669 ntpd[1973]: gps base set to 2026-02-22 (week 2407)
Mar 3 12:46:22.465127 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123
Mar 3 12:46:22.465203 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 3 12:46:22.465559 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123
Mar 3 12:46:22.465612 ntpd[1973]: Listen normally on 3 eth0 172.31.25.173:123
Mar 3 12:46:22.465664 ntpd[1973]: Listen normally on 4 lo [::1]:123
Mar 3 12:46:22.465720 ntpd[1973]: bind(21) AF_INET6 [fe80::46d:c5ff:fedc:4a23%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 3 12:46:22.465760 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::46d:c5ff:fedc:4a23%2]:123
Mar 3 12:46:22.474007 extend-filesystems[1971]: Checking size of /dev/nvme0n1p9
Mar 3 12:46:22.476881 systemd-coredump[2016]: Process 1973 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 3 12:46:22.497287 dbus-daemon[1968]: [system] SELinux support is enabled
Mar 3 12:46:22.500473 (ntainerd)[2007]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 3 12:46:22.501802 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Mar 3 12:46:22.519551 systemd[1]: Started systemd-coredump@0-2016-0.service - Process Core Dump (PID 2016/UID 0).
Mar 3 12:46:22.530555 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 3 12:46:22.541477 systemd[1]: motdgen.service: Deactivated successfully.
Mar 3 12:46:22.550640 update_engine[1979]: I20260303 12:46:22.532874 1979 main.cc:92] Flatcar Update Engine starting
Mar 3 12:46:22.548887 dbus-daemon[1968]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1853 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 3 12:46:22.554196 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 3 12:46:22.560644 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 3 12:46:22.560706 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 3 12:46:22.577079 tar[1999]: linux-arm64/LICENSE
Mar 3 12:46:22.577079 tar[1999]: linux-arm64/helm
Mar 3 12:46:22.565351 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 3 12:46:22.565398 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 3 12:46:22.579839 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 3 12:46:22.594815 update_engine[1979]: I20260303 12:46:22.592370 1979 update_check_scheduler.cc:74] Next update check in 2m6s
Mar 3 12:46:22.594942 extend-filesystems[1971]: Resized partition /dev/nvme0n1p9
Mar 3 12:46:22.597941 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 3 12:46:22.605551 systemd[1]: Started update-engine.service - Update Engine.
Mar 3 12:46:22.629606 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 3 12:46:22.635161 extend-filesystems[2027]: resize2fs 1.47.3 (8-Jul-2025)
Mar 3 12:46:22.670040 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 3 12:46:22.673196 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 3 12:46:22.711891 coreos-metadata[1967]: Mar 03 12:46:22.711 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 3 12:46:22.720073 coreos-metadata[1967]: Mar 03 12:46:22.717 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 3 12:46:22.724134 coreos-metadata[1967]: Mar 03 12:46:22.722 INFO Fetch successful
Mar 3 12:46:22.724134 coreos-metadata[1967]: Mar 03 12:46:22.722 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 3 12:46:22.729087 coreos-metadata[1967]: Mar 03 12:46:22.727 INFO Fetch successful
Mar 3 12:46:22.729087 coreos-metadata[1967]: Mar 03 12:46:22.727 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 3 12:46:22.738497 coreos-metadata[1967]: Mar 03 12:46:22.738 INFO Fetch successful
Mar 3 12:46:22.738497 coreos-metadata[1967]: Mar 03 12:46:22.738 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 3 12:46:22.738497 coreos-metadata[1967]: Mar 03 12:46:22.738 INFO Fetch successful
Mar 3 12:46:22.738497 coreos-metadata[1967]: Mar 03 12:46:22.738 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 3 12:46:22.747656 coreos-metadata[1967]: Mar 03 12:46:22.745 INFO Fetch failed with 404: resource not found
Mar 3 12:46:22.747656 coreos-metadata[1967]: Mar 03 12:46:22.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 3 12:46:22.749473 coreos-metadata[1967]: Mar 03 12:46:22.748 INFO Fetch successful
Mar 3 12:46:22.749473 coreos-metadata[1967]: Mar 03 12:46:22.748 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 3 12:46:22.754035 coreos-metadata[1967]: Mar 03 12:46:22.753 INFO Fetch successful
Mar 3 12:46:22.754035 coreos-metadata[1967]: Mar 03 12:46:22.754 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 3 12:46:22.755372 coreos-metadata[1967]: Mar 03 12:46:22.755 INFO Fetch successful
Mar 3 12:46:22.755372 coreos-metadata[1967]: Mar 03 12:46:22.755 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 3 12:46:22.762123 coreos-metadata[1967]: Mar 03 12:46:22.762 INFO Fetch successful
Mar 3 12:46:22.762411 coreos-metadata[1967]: Mar 03 12:46:22.762 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 3 12:46:22.769062 coreos-metadata[1967]: Mar 03 12:46:22.768 INFO Fetch successful
Mar 3 12:46:22.926571 bash[2050]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 12:46:22.932516 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 3 12:46:22.949370 systemd[1]: Starting sshkeys.service...
Mar 3 12:46:22.958499 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 3 12:46:22.988401 extend-filesystems[2027]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 3 12:46:22.988401 extend-filesystems[2027]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 3 12:46:22.988401 extend-filesystems[2027]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 3 12:46:23.021168 extend-filesystems[1971]: Resized filesystem in /dev/nvme0n1p9
Mar 3 12:46:22.997792 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 3 12:46:22.998369 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 3 12:46:23.051502 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 3 12:46:23.056310 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 3 12:46:23.060685 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 3 12:46:23.086601 systemd-logind[1978]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 3 12:46:23.086673 systemd-logind[1978]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 3 12:46:23.091735 systemd-logind[1978]: New seat seat0.
Mar 3 12:46:23.101120 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 3 12:46:23.112352 locksmithd[2028]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 12:46:23.119619 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 3 12:46:23.127817 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 3 12:46:23.200269 systemd-networkd[1853]: eth0: Gained IPv6LL
Mar 3 12:46:23.263694 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 3 12:46:23.270759 systemd[1]: Reached target network-online.target - Network is Online.
Mar 3 12:46:23.279804 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 3 12:46:23.293616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 12:46:23.301193 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 3 12:46:23.541961 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: Initializing new seelog logger
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: New Seelog Logger Creation Complete
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: 2026/03/03 12:46:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: 2026/03/03 12:46:23 processing appconfig overrides
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: 2026/03/03 12:46:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: 2026/03/03 12:46:23 processing appconfig overrides
Mar 3 12:46:23.761747 amazon-ssm-agent[2120]: 2026-03-03 12:46:23.7609 INFO Proxy environment variables:
Mar 3 12:46:23.762915 amazon-ssm-agent[2120]: 2026/03/03 12:46:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:23.762915 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:23.762915 amazon-ssm-agent[2120]: 2026/03/03 12:46:23 processing appconfig overrides
Mar 3 12:46:23.768394 coreos-metadata[2108]: Mar 03 12:46:23.763 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 3 12:46:23.779579 coreos-metadata[2108]: Mar 03 12:46:23.771 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 3 12:46:23.789068 coreos-metadata[2108]: Mar 03 12:46:23.787 INFO Fetch successful
Mar 3 12:46:23.789068 coreos-metadata[2108]: Mar 03 12:46:23.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 3 12:46:23.789068 coreos-metadata[2108]: Mar 03 12:46:23.787 INFO Fetch successful
Mar 3 12:46:23.789297 amazon-ssm-agent[2120]: 2026/03/03 12:46:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:23.789297 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 3 12:46:23.789297 amazon-ssm-agent[2120]: 2026/03/03 12:46:23 processing appconfig overrides
Mar 3 12:46:23.819606 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 3 12:46:23.824723 unknown[2108]: wrote ssh authorized keys file for user: core
Mar 3 12:46:23.857647 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 3 12:46:23.860853 containerd[2007]: time="2026-03-03T12:46:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 3 12:46:23.891199 containerd[2007]: time="2026-03-03T12:46:23.866989730Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 12:46:23.883728 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 3 12:46:23.910152 amazon-ssm-agent[2120]: 2026-03-03 12:46:23.7609 INFO https_proxy:
Mar 3 12:46:23.903363 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 3 12:46:23.950335 update-ssh-keys[2170]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 12:46:23.965519 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 3 12:46:23.977173 systemd[1]: Finished sshkeys.service.
Mar 3 12:46:24.005180 amazon-ssm-agent[2120]: 2026-03-03 12:46:23.7609 INFO http_proxy:
Mar 3 12:46:24.078640 containerd[2007]: time="2026-03-03T12:46:24.078446099Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.416µs"
Mar 3 12:46:24.078640 containerd[2007]: time="2026-03-03T12:46:24.078529055Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 3 12:46:24.078640 containerd[2007]: time="2026-03-03T12:46:24.078575615Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 3 12:46:24.080222 containerd[2007]: time="2026-03-03T12:46:24.078939539Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 3 12:46:24.087096 containerd[2007]: time="2026-03-03T12:46:24.086349167Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 3 12:46:24.087096 containerd[2007]: time="2026-03-03T12:46:24.086481887Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 12:46:24.087096 containerd[2007]: time="2026-03-03T12:46:24.086676383Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 12:46:24.087096 containerd[2007]: time="2026-03-03T12:46:24.086713091Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 12:46:24.087341 containerd[2007]: time="2026-03-03T12:46:24.087262751Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 12:46:24.087341 containerd[2007]: time="2026-03-03T12:46:24.087311135Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 12:46:24.087489 containerd[2007]: time="2026-03-03T12:46:24.087343595Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 12:46:24.087489 containerd[2007]: time="2026-03-03T12:46:24.087367259Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 3 12:46:24.087961 containerd[2007]: time="2026-03-03T12:46:24.087613283Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 3 12:46:24.098317 containerd[2007]: time="2026-03-03T12:46:24.097822991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 12:46:24.098317 containerd[2007]: time="2026-03-03T12:46:24.097945763Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 12:46:24.098317 containerd[2007]: time="2026-03-03T12:46:24.097976627Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 3 12:46:24.098317 containerd[2007]: time="2026-03-03T12:46:24.098084243Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 3 12:46:24.099802 containerd[2007]: time="2026-03-03T12:46:24.098722523Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 3 12:46:24.099802 containerd[2007]: time="2026-03-03T12:46:24.098942159Z" level=info msg="metadata content store policy set" policy=shared
Mar 3 12:46:24.106138 amazon-ssm-agent[2120]: 2026-03-03 12:46:23.7609 INFO no_proxy:
Mar 3 12:46:24.129648 containerd[2007]: time="2026-03-03T12:46:24.129562007Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 12:46:24.129813 containerd[2007]: time="2026-03-03T12:46:24.129737579Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 3 12:46:24.129813 containerd[2007]: time="2026-03-03T12:46:24.129803927Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 3 12:46:24.129954 containerd[2007]: time="2026-03-03T12:46:24.129836663Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 3 12:46:24.129954 containerd[2007]: time="2026-03-03T12:46:24.129868367Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 3 12:46:24.129954 containerd[2007]: time="2026-03-03T12:46:24.129897419Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 3 12:46:24.129954 containerd[2007]: time="2026-03-03T12:46:24.129942551Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 3 12:46:24.130157 containerd[2007]: time="2026-03-03T12:46:24.129974987Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 3 12:46:24.130157 containerd[2007]: time="2026-03-03T12:46:24.130006607Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 3 12:46:24.130157 containerd[2007]: time="2026-03-03T12:46:24.130070735Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 3 12:46:24.130157 containerd[2007]: time="2026-03-03T12:46:24.130104467Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 3 12:46:24.130157 containerd[2007]: time="2026-03-03T12:46:24.130139483Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 12:46:24.135968 containerd[2007]: time="2026-03-03T12:46:24.135476759Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 3 12:46:24.135968 containerd[2007]: time="2026-03-03T12:46:24.135560171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 3 12:46:24.135968 containerd[2007]: time="2026-03-03T12:46:24.135611315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 3 12:46:24.135968 containerd[2007]: time="2026-03-03T12:46:24.135648479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 3 12:46:24.135968 containerd[2007]: time="2026-03-03T12:46:24.135678851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 3 12:46:24.135968 containerd[2007]: time="2026-03-03T12:46:24.135707483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 3 12:46:24.139058 containerd[2007]: time="2026-03-03T12:46:24.135739811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 3 12:46:24.139058 containerd[2007]: time="2026-03-03T12:46:24.137404547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 3 12:46:24.139058 containerd[2007]: time="2026-03-03T12:46:24.137451443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 3 12:46:24.139058 containerd[2007]: time="2026-03-03T12:46:24.137481371Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 3 12:46:24.139058 containerd[2007]: time="2026-03-03T12:46:24.137512967Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 12:46:24.146148 containerd[2007]: time="2026-03-03T12:46:24.145335275Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 3 12:46:24.146148 containerd[2007]: time="2026-03-03T12:46:24.145426907Z" level=info msg="Start snapshots syncer"
Mar 3 12:46:24.146148 containerd[2007]: time="2026-03-03T12:46:24.145482515Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 3 12:46:24.146375 containerd[2007]: time="2026-03-03T12:46:24.145989911Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 12:46:24.146375 containerd[2007]: time="2026-03-03T12:46:24.146133803Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 3 12:46:24.146375 containerd[2007]: time="2026-03-03T12:46:24.146247263Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 3 12:46:24.146656 containerd[2007]: time="2026-03-03T12:46:24.146510927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 3 12:46:24.146656 containerd[2007]: time="2026-03-03T12:46:24.146561159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 3 12:46:24.146656 containerd[2007]: time="2026-03-03T12:46:24.146589851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 3 12:46:24.146656 containerd[2007]: time="2026-03-03T12:46:24.146621735Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 3 12:46:24.146839 containerd[2007]: time="2026-03-03T12:46:24.146656223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 3 12:46:24.146839 containerd[2007]: time="2026-03-03T12:46:24.146686895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 3 12:46:24.146839 containerd[2007]: time="2026-03-03T12:46:24.146715971Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 3 12:46:24.146839 containerd[2007]: time="2026-03-03T12:46:24.146770367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 12:46:24.146839 containerd[2007]: time="2026-03-03T12:46:24.146804999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 12:46:24.146839 containerd[2007]: time="2026-03-03T12:46:24.146834735Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 12:46:24.151265 containerd[2007]: time="2026-03-03T12:46:24.146908031Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 12:46:24.151265 containerd[2007]: time="2026-03-03T12:46:24.146944763Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 12:46:24.151265 containerd[2007]: time="2026-03-03T12:46:24.146968355Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 12:46:24.151265 containerd[2007]: time="2026-03-03T12:46:24.146994059Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 12:46:24.157889 containerd[2007]: time="2026-03-03T12:46:24.157050383Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 12:46:24.157889 containerd[2007]: time="2026-03-03T12:46:24.157150883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 12:46:24.157889 containerd[2007]: time="2026-03-03T12:46:24.157186235Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 12:46:24.157889 containerd[2007]: time="2026-03-03T12:46:24.157410023Z" level=info msg="runtime interface created"
Mar 3 12:46:24.157889 containerd[2007]: time="2026-03-03T12:46:24.157433927Z" level=info msg="created NRI interface"
Mar 3 12:46:24.157889 containerd[2007]: time="2026-03-03T12:46:24.157458995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 3 12:46:24.157889 containerd[2007]: time="2026-03-03T12:46:24.157498871Z" level=info msg="Connect containerd service"
Mar 3 12:46:24.157889 containerd[2007]: time="2026-03-03T12:46:24.157563635Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 3 12:46:24.161053 containerd[2007]: time="2026-03-03T12:46:24.160809119Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 12:46:24.203436 systemd-coredump[2020]: Process 1973 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id.
Stack trace of thread 1973: #0 0x0000aaaad1500b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaad14afe60 n/a (ntpd + 0xfe60) #2 0x0000aaaad14b0240 n/a (ntpd + 0x10240) #3 0x0000aaaad14abe14 n/a (ntpd + 0xbe14) #4 0x0000aaaad14ad3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaad14b5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaad14a738c n/a (ntpd + 0x738c) #7 0x0000ffff996c2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff996c2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaad14a73f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Mar 3 12:46:24.207060 amazon-ssm-agent[2120]: 2026-03-03 12:46:23.7614 INFO Checking if agent identity type OnPrem can be assumed Mar 3 12:46:24.220511 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 3 12:46:24.220824 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 3 12:46:24.242744 systemd[1]: systemd-coredump@0-2016-0.service: Deactivated successfully. Mar 3 12:46:24.308173 amazon-ssm-agent[2120]: 2026-03-03 12:46:23.7617 INFO Checking if agent identity type EC2 can be assumed Mar 3 12:46:24.382202 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Mar 3 12:46:24.391982 systemd[1]: Started ntpd.service - Network Time Service. 
Mar 3 12:46:24.408449 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.1971 INFO Agent will take identity from EC2 Mar 3 12:46:24.506195 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.2079 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 3 12:46:24.568945 ntpd[2201]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting Mar 3 12:46:24.571699 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: ntpd 4.2.8p18@1.4062-o Tue Mar 3 10:21:35 UTC 2026 (1): Starting Mar 3 12:46:24.571699 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 12:46:24.571699 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: ---------------------------------------------------- Mar 3 12:46:24.571699 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: ntp-4 is maintained by Network Time Foundation, Mar 3 12:46:24.571699 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 12:46:24.571699 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: corporation. Support and training for ntp-4 are Mar 3 12:46:24.571699 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: available at https://www.nwtime.org/support Mar 3 12:46:24.571699 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: ---------------------------------------------------- Mar 3 12:46:24.569131 ntpd[2201]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 3 12:46:24.569152 ntpd[2201]: ---------------------------------------------------- Mar 3 12:46:24.569170 ntpd[2201]: ntp-4 is maintained by Network Time Foundation, Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: proto: precision = 0.096 usec (-23) Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: basedate set to 2026-02-19 Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: gps base set to 2026-02-22 (week 2407) Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 12:46:24.580964 
ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Listen normally on 3 eth0 172.31.25.173:123 Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Listen normally on 4 lo [::1]:123 Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Listen normally on 5 eth0 [fe80::46d:c5ff:fedc:4a23%2]:123 Mar 3 12:46:24.580964 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: Listening on routing socket on fd #22 for interface updates Mar 3 12:46:24.569187 ntpd[2201]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 3 12:46:24.569203 ntpd[2201]: corporation. Support and training for ntp-4 are Mar 3 12:46:24.569220 ntpd[2201]: available at https://www.nwtime.org/support Mar 3 12:46:24.569236 ntpd[2201]: ---------------------------------------------------- Mar 3 12:46:24.578489 ntpd[2201]: proto: precision = 0.096 usec (-23) Mar 3 12:46:24.578860 ntpd[2201]: basedate set to 2026-02-19 Mar 3 12:46:24.578888 ntpd[2201]: gps base set to 2026-02-22 (week 2407) Mar 3 12:46:24.579082 ntpd[2201]: Listen and drop on 0 v6wildcard [::]:123 Mar 3 12:46:24.579137 ntpd[2201]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 3 12:46:24.579464 ntpd[2201]: Listen normally on 2 lo 127.0.0.1:123 Mar 3 12:46:24.579512 ntpd[2201]: Listen normally on 3 eth0 172.31.25.173:123 Mar 3 12:46:24.579559 ntpd[2201]: Listen normally on 4 lo [::1]:123 Mar 3 12:46:24.579604 ntpd[2201]: Listen normally on 5 eth0 [fe80::46d:c5ff:fedc:4a23%2]:123 Mar 3 12:46:24.579646 ntpd[2201]: Listening on routing socket on fd #22 for interface updates Mar 3 12:46:24.594399 polkitd[2171]: Started polkitd version 126 Mar 3 12:46:24.609073 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.2080 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 3 12:46:24.629928 ntpd[2201]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 3 12:46:24.630038 ntpd[2201]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 3 12:46:24.630249 
ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 3 12:46:24.630249 ntpd[2201]: 3 Mar 12:46:24 ntpd[2201]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 3 12:46:24.666132 polkitd[2171]: Loading rules from directory /etc/polkit-1/rules.d Mar 3 12:46:24.666839 polkitd[2171]: Loading rules from directory /run/polkit-1/rules.d Mar 3 12:46:24.666968 polkitd[2171]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 3 12:46:24.675828 polkitd[2171]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 3 12:46:24.675939 polkitd[2171]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 3 12:46:24.676104 polkitd[2171]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 3 12:46:24.687383 polkitd[2171]: Finished loading, compiling and executing 2 rules Mar 3 12:46:24.691685 systemd[1]: Started polkit.service - Authorization Manager. Mar 3 12:46:24.698922 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 3 12:46:24.704497 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.2080 INFO [amazon-ssm-agent] Starting Core Agent Mar 3 12:46:24.707870 polkitd[2171]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 3 12:46:24.787870 systemd-hostnamed[2026]: Hostname set to (transient) Mar 3 12:46:24.789385 systemd-resolved[1854]: System hostname changed to 'ip-172-31-25-173'. Mar 3 12:46:24.806996 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.2087 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838358883Z" level=info msg="Start subscribing containerd event" Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838477563Z" level=info msg="Start recovering state" Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838657659Z" level=info msg="Start event monitor" Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838688835Z" level=info msg="Start cni network conf syncer for default" Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838710627Z" level=info msg="Start streaming server" Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838734543Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838753083Z" level=info msg="runtime interface starting up..." Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838768299Z" level=info msg="starting plugins..." Mar 3 12:46:24.839700 containerd[2007]: time="2026-03-03T12:46:24.838800855Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 3 12:46:24.841047 containerd[2007]: time="2026-03-03T12:46:24.840455763Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 3 12:46:24.843181 containerd[2007]: time="2026-03-03T12:46:24.842577459Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 3 12:46:24.845129 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 3 12:46:24.849875 containerd[2007]: time="2026-03-03T12:46:24.849805815Z" level=info msg="containerd successfully booted in 1.000920s" Mar 3 12:46:24.909569 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.2088 INFO [Registrar] Starting registrar module Mar 3 12:46:25.010458 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.2132 INFO [EC2Identity] Checking disk for registration info Mar 3 12:46:25.110259 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.2133 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 3 12:46:25.210910 amazon-ssm-agent[2120]: 2026-03-03 12:46:24.2133 INFO [EC2Identity] Generating registration keypair Mar 3 12:46:25.248399 tar[1999]: linux-arm64/README.md Mar 3 12:46:25.263125 sshd_keygen[2005]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 3 12:46:25.288617 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 3 12:46:25.330219 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 3 12:46:25.338464 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 3 12:46:25.347187 systemd[1]: Started sshd@0-172.31.25.173:22-20.161.92.111:43696.service - OpenSSH per-connection server daemon (20.161.92.111:43696). Mar 3 12:46:25.394609 systemd[1]: issuegen.service: Deactivated successfully. Mar 3 12:46:25.397161 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 3 12:46:25.408520 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 3 12:46:25.460791 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 3 12:46:25.474526 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 3 12:46:25.482106 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 3 12:46:25.489591 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 3 12:46:25.891399 sshd[2235]: Accepted publickey for core from 20.161.92.111 port 43696 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:46:25.897111 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:46:25.916723 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 3 12:46:25.922566 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 3 12:46:25.958488 systemd-logind[1978]: New session 1 of user core. Mar 3 12:46:25.984855 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 3 12:46:25.999251 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 3 12:46:26.018450 (systemd)[2247]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 3 12:46:26.027782 systemd-logind[1978]: New session c1 of user core. Mar 3 12:46:26.113987 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1137 INFO [EC2Identity] Checking write access before registering Mar 3 12:46:26.154383 amazon-ssm-agent[2120]: 2026/03/03 12:46:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 12:46:26.154383 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 3 12:46:26.154555 amazon-ssm-agent[2120]: 2026/03/03 12:46:26 processing appconfig overrides Mar 3 12:46:26.181906 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1145 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 3 12:46:26.182039 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1540 INFO [EC2Identity] EC2 registration was successful. Mar 3 12:46:26.182039 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1541 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Mar 3 12:46:26.182039 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1542 INFO [CredentialRefresher] credentialRefresher has started Mar 3 12:46:26.182039 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1542 INFO [CredentialRefresher] Starting credentials refresher loop Mar 3 12:46:26.182039 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1815 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 3 12:46:26.182441 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1818 INFO [CredentialRefresher] Credentials ready Mar 3 12:46:26.217085 amazon-ssm-agent[2120]: 2026-03-03 12:46:26.1825 INFO [CredentialRefresher] Next credential rotation will be in 29.999984434 minutes Mar 3 12:46:26.347557 systemd[2247]: Queued start job for default target default.target. Mar 3 12:46:26.355570 systemd[2247]: Created slice app.slice - User Application Slice. Mar 3 12:46:26.355641 systemd[2247]: Reached target paths.target - Paths. Mar 3 12:46:26.355737 systemd[2247]: Reached target timers.target - Timers. Mar 3 12:46:26.361255 systemd[2247]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 3 12:46:26.387498 systemd[2247]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 3 12:46:26.389161 systemd[2247]: Reached target sockets.target - Sockets. Mar 3 12:46:26.389315 systemd[2247]: Reached target basic.target - Basic System. Mar 3 12:46:26.389409 systemd[2247]: Reached target default.target - Main User Target. Mar 3 12:46:26.389475 systemd[2247]: Startup finished in 339ms. Mar 3 12:46:26.390217 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 3 12:46:26.404366 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 3 12:46:26.691966 systemd[1]: Started sshd@1-172.31.25.173:22-20.161.92.111:43712.service - OpenSSH per-connection server daemon (20.161.92.111:43712). 
Mar 3 12:46:27.186353 sshd[2258]: Accepted publickey for core from 20.161.92.111 port 43712 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:46:27.189039 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:46:27.203962 systemd-logind[1978]: New session 2 of user core. Mar 3 12:46:27.213657 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 3 12:46:27.229051 amazon-ssm-agent[2120]: 2026-03-03 12:46:27.2286 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 3 12:46:27.277698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:46:27.281327 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 3 12:46:27.284419 systemd[1]: Startup finished in 3.875s (kernel) + 9.486s (initrd) + 11.170s (userspace) = 24.532s. Mar 3 12:46:27.307868 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 12:46:27.331540 amazon-ssm-agent[2120]: 2026-03-03 12:46:27.2375 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2266) started Mar 3 12:46:27.432068 amazon-ssm-agent[2120]: 2026-03-03 12:46:27.2375 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 3 12:46:27.454220 sshd[2267]: Connection closed by 20.161.92.111 port 43712 Mar 3 12:46:27.454471 sshd-session[2258]: pam_unix(sshd:session): session closed for user core Mar 3 12:46:27.463712 systemd[1]: sshd@1-172.31.25.173:22-20.161.92.111:43712.service: Deactivated successfully. Mar 3 12:46:27.472328 systemd[1]: session-2.scope: Deactivated successfully. Mar 3 12:46:27.480186 systemd-logind[1978]: Session 2 logged out. Waiting for processes to exit. Mar 3 12:46:27.485581 systemd-logind[1978]: Removed session 2. 
Mar 3 12:46:27.541476 systemd[1]: Started sshd@2-172.31.25.173:22-20.161.92.111:43728.service - OpenSSH per-connection server daemon (20.161.92.111:43728). Mar 3 12:46:28.010886 sshd[2292]: Accepted publickey for core from 20.161.92.111 port 43728 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:46:28.014155 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:46:28.022641 systemd-logind[1978]: New session 3 of user core. Mar 3 12:46:28.037278 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 3 12:46:28.245744 sshd[2299]: Connection closed by 20.161.92.111 port 43728 Mar 3 12:46:28.247384 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Mar 3 12:46:28.257996 systemd[1]: sshd@2-172.31.25.173:22-20.161.92.111:43728.service: Deactivated successfully. Mar 3 12:46:28.263589 systemd[1]: session-3.scope: Deactivated successfully. Mar 3 12:46:28.267540 systemd-logind[1978]: Session 3 logged out. Waiting for processes to exit. Mar 3 12:46:28.270775 systemd-logind[1978]: Removed session 3. Mar 3 12:46:28.355368 systemd[1]: Started sshd@3-172.31.25.173:22-20.161.92.111:43740.service - OpenSSH per-connection server daemon (20.161.92.111:43740). Mar 3 12:46:28.826002 sshd[2305]: Accepted publickey for core from 20.161.92.111 port 43740 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:46:28.829653 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:46:28.846178 systemd-logind[1978]: New session 4 of user core. Mar 3 12:46:28.856398 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 3 12:46:28.948594 kubelet[2275]: E0303 12:46:28.948485 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 12:46:28.953599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 12:46:28.954161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 12:46:28.956257 systemd[1]: kubelet.service: Consumed 1.364s CPU time, 248.1M memory peak. Mar 3 12:46:29.069216 sshd[2309]: Connection closed by 20.161.92.111 port 43740 Mar 3 12:46:29.070118 sshd-session[2305]: pam_unix(sshd:session): session closed for user core Mar 3 12:46:29.079535 systemd[1]: sshd@3-172.31.25.173:22-20.161.92.111:43740.service: Deactivated successfully. Mar 3 12:46:29.083552 systemd[1]: session-4.scope: Deactivated successfully. Mar 3 12:46:29.086840 systemd-logind[1978]: Session 4 logged out. Waiting for processes to exit. Mar 3 12:46:29.090520 systemd-logind[1978]: Removed session 4. Mar 3 12:46:29.177695 systemd[1]: Started sshd@4-172.31.25.173:22-20.161.92.111:43742.service - OpenSSH per-connection server daemon (20.161.92.111:43742). Mar 3 12:46:29.685793 sshd[2316]: Accepted publickey for core from 20.161.92.111 port 43742 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:46:29.688349 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:46:29.699157 systemd-logind[1978]: New session 5 of user core. Mar 3 12:46:29.702391 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 3 12:46:29.881891 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 3 12:46:29.882652 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 12:46:29.900600 sudo[2320]: pam_unix(sudo:session): session closed for user root Mar 3 12:46:29.985902 sshd[2319]: Connection closed by 20.161.92.111 port 43742 Mar 3 12:46:29.987117 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Mar 3 12:46:29.995548 systemd[1]: sshd@4-172.31.25.173:22-20.161.92.111:43742.service: Deactivated successfully. Mar 3 12:46:30.000598 systemd[1]: session-5.scope: Deactivated successfully. Mar 3 12:46:30.004278 systemd-logind[1978]: Session 5 logged out. Waiting for processes to exit. Mar 3 12:46:30.007714 systemd-logind[1978]: Removed session 5. Mar 3 12:46:30.085144 systemd[1]: Started sshd@5-172.31.25.173:22-20.161.92.111:54938.service - OpenSSH per-connection server daemon (20.161.92.111:54938). Mar 3 12:46:30.576006 sshd[2326]: Accepted publickey for core from 20.161.92.111 port 54938 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:46:30.578618 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:46:30.587642 systemd-logind[1978]: New session 6 of user core. Mar 3 12:46:30.601405 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 3 12:46:30.754906 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 3 12:46:30.755663 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 12:46:30.765349 sudo[2331]: pam_unix(sudo:session): session closed for user root Mar 3 12:46:30.776441 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 3 12:46:30.777742 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 12:46:30.798060 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 3 12:46:30.871140 augenrules[2353]: No rules Mar 3 12:46:30.873961 systemd[1]: audit-rules.service: Deactivated successfully. Mar 3 12:46:30.876154 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 3 12:46:30.878909 sudo[2330]: pam_unix(sudo:session): session closed for user root Mar 3 12:46:30.963997 sshd[2329]: Connection closed by 20.161.92.111 port 54938 Mar 3 12:46:30.965237 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Mar 3 12:46:30.971865 systemd[1]: sshd@5-172.31.25.173:22-20.161.92.111:54938.service: Deactivated successfully. Mar 3 12:46:30.976536 systemd[1]: session-6.scope: Deactivated successfully. Mar 3 12:46:30.980089 systemd-logind[1978]: Session 6 logged out. Waiting for processes to exit. Mar 3 12:46:30.984266 systemd-logind[1978]: Removed session 6. Mar 3 12:46:31.056452 systemd[1]: Started sshd@6-172.31.25.173:22-20.161.92.111:54942.service - OpenSSH per-connection server daemon (20.161.92.111:54942). 
Mar 3 12:46:31.520083 sshd[2362]: Accepted publickey for core from 20.161.92.111 port 54942 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:46:31.523285 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:46:31.532673 systemd-logind[1978]: New session 7 of user core. Mar 3 12:46:31.539383 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 3 12:46:31.725602 systemd-resolved[1854]: Clock change detected. Flushing caches. Mar 3 12:46:31.843219 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 3 12:46:31.844556 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 12:46:32.528353 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 3 12:46:32.544362 (dockerd)[2384]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 3 12:46:32.976439 dockerd[2384]: time="2026-03-03T12:46:32.976320362Z" level=info msg="Starting up" Mar 3 12:46:32.978087 dockerd[2384]: time="2026-03-03T12:46:32.978018110Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 3 12:46:33.001874 dockerd[2384]: time="2026-03-03T12:46:33.001799050Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 3 12:46:33.045477 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2057282431-merged.mount: Deactivated successfully. Mar 3 12:46:33.131812 dockerd[2384]: time="2026-03-03T12:46:33.130479262Z" level=info msg="Loading containers: start." Mar 3 12:46:33.147776 kernel: Initializing XFRM netlink socket Mar 3 12:46:33.540642 (udev-worker)[2407]: Network interface NamePolicy= disabled on kernel command line. 
Mar 3 12:46:33.624545 systemd-networkd[1853]: docker0: Link UP Mar 3 12:46:33.631440 dockerd[2384]: time="2026-03-03T12:46:33.631349173Z" level=info msg="Loading containers: done." Mar 3 12:46:33.661093 dockerd[2384]: time="2026-03-03T12:46:33.660989209Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 3 12:46:33.661344 dockerd[2384]: time="2026-03-03T12:46:33.661134769Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 3 12:46:33.661344 dockerd[2384]: time="2026-03-03T12:46:33.661310761Z" level=info msg="Initializing buildkit" Mar 3 12:46:33.705673 dockerd[2384]: time="2026-03-03T12:46:33.705555613Z" level=info msg="Completed buildkit initialization" Mar 3 12:46:33.722209 dockerd[2384]: time="2026-03-03T12:46:33.722116993Z" level=info msg="Daemon has completed initialization" Mar 3 12:46:33.722616 dockerd[2384]: time="2026-03-03T12:46:33.722416537Z" level=info msg="API listen on /run/docker.sock" Mar 3 12:46:33.723007 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 3 12:46:34.036446 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4063815358-merged.mount: Deactivated successfully. Mar 3 12:46:34.587105 containerd[2007]: time="2026-03-03T12:46:34.587050382Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 3 12:46:35.210215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894042353.mount: Deactivated successfully. 
Mar 3 12:46:37.057854 containerd[2007]: time="2026-03-03T12:46:37.057784478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:37.059551 containerd[2007]: time="2026-03-03T12:46:37.059496374Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796" Mar 3 12:46:37.061317 containerd[2007]: time="2026-03-03T12:46:37.060448346Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:37.070576 containerd[2007]: time="2026-03-03T12:46:37.070470758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:37.072934 containerd[2007]: time="2026-03-03T12:46:37.072860006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 2.485748328s" Mar 3 12:46:37.072934 containerd[2007]: time="2026-03-03T12:46:37.072933962Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\"" Mar 3 12:46:37.073719 containerd[2007]: time="2026-03-03T12:46:37.073634942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 3 12:46:38.735044 containerd[2007]: time="2026-03-03T12:46:38.734961570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:38.736984 containerd[2007]: time="2026-03-03T12:46:38.736924530Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039" Mar 3 12:46:38.739029 containerd[2007]: time="2026-03-03T12:46:38.738422274Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:38.745193 containerd[2007]: time="2026-03-03T12:46:38.745107678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:38.751927 containerd[2007]: time="2026-03-03T12:46:38.751860222Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.678164428s" Mar 3 12:46:38.751927 containerd[2007]: time="2026-03-03T12:46:38.751927638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\"" Mar 3 12:46:38.753181 containerd[2007]: time="2026-03-03T12:46:38.752837262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 3 12:46:39.287653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 3 12:46:39.291229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:46:39.659313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 3 12:46:39.674340 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 12:46:39.790922 kubelet[2668]: E0303 12:46:39.789723 2668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 12:46:39.799925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 12:46:39.800219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 12:46:39.801320 systemd[1]: kubelet.service: Consumed 341ms CPU time, 106.5M memory peak. Mar 3 12:46:40.239205 containerd[2007]: time="2026-03-03T12:46:40.239125830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:40.242804 containerd[2007]: time="2026-03-03T12:46:40.242750682Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901" Mar 3 12:46:40.245144 containerd[2007]: time="2026-03-03T12:46:40.245068446Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:40.250513 containerd[2007]: time="2026-03-03T12:46:40.250436442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:40.252930 containerd[2007]: time="2026-03-03T12:46:40.252416070Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id 
\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 1.499521412s" Mar 3 12:46:40.252930 containerd[2007]: time="2026-03-03T12:46:40.252467730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\"" Mar 3 12:46:40.253868 containerd[2007]: time="2026-03-03T12:46:40.253425558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 3 12:46:41.759160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389037305.mount: Deactivated successfully. Mar 3 12:46:42.178033 containerd[2007]: time="2026-03-03T12:46:42.177947683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:42.180290 containerd[2007]: time="2026-03-03T12:46:42.179912191Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583" Mar 3 12:46:42.182756 containerd[2007]: time="2026-03-03T12:46:42.182667775Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:42.188217 containerd[2007]: time="2026-03-03T12:46:42.188159455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:42.189326 containerd[2007]: time="2026-03-03T12:46:42.189264151Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag 
\"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.935782037s" Mar 3 12:46:42.189453 containerd[2007]: time="2026-03-03T12:46:42.189323515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\"" Mar 3 12:46:42.190367 containerd[2007]: time="2026-03-03T12:46:42.190031407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 3 12:46:42.782071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163689637.mount: Deactivated successfully. Mar 3 12:46:44.074920 containerd[2007]: time="2026-03-03T12:46:44.074829549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:44.077911 containerd[2007]: time="2026-03-03T12:46:44.077849001Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211" Mar 3 12:46:44.079660 containerd[2007]: time="2026-03-03T12:46:44.079574025Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:44.085743 containerd[2007]: time="2026-03-03T12:46:44.084778965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:44.087149 containerd[2007]: time="2026-03-03T12:46:44.087073713Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.89698353s" Mar 3 12:46:44.087149 containerd[2007]: time="2026-03-03T12:46:44.087137349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Mar 3 12:46:44.087968 containerd[2007]: time="2026-03-03T12:46:44.087853881Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 3 12:46:44.553162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216417015.mount: Deactivated successfully. Mar 3 12:46:44.566914 containerd[2007]: time="2026-03-03T12:46:44.566834675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:44.568885 containerd[2007]: time="2026-03-03T12:46:44.568813415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 3 12:46:44.571398 containerd[2007]: time="2026-03-03T12:46:44.571326311Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:44.577965 containerd[2007]: time="2026-03-03T12:46:44.577902335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:44.580723 containerd[2007]: time="2026-03-03T12:46:44.580449851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 492.531926ms" Mar 3 
12:46:44.580723 containerd[2007]: time="2026-03-03T12:46:44.580564499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 3 12:46:44.583723 containerd[2007]: time="2026-03-03T12:46:44.582741011Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 3 12:46:45.227370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776875791.mount: Deactivated successfully. Mar 3 12:46:46.294280 containerd[2007]: time="2026-03-03T12:46:46.294198552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:46.296791 containerd[2007]: time="2026-03-03T12:46:46.295892136Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165" Mar 3 12:46:46.302411 containerd[2007]: time="2026-03-03T12:46:46.302339820Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:46.308683 containerd[2007]: time="2026-03-03T12:46:46.308608932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:46:46.310758 containerd[2007]: time="2026-03-03T12:46:46.310678320Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.727844393s" Mar 3 12:46:46.311155 containerd[2007]: time="2026-03-03T12:46:46.310755528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference 
\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Mar 3 12:46:50.037638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 3 12:46:50.042007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:46:50.226458 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 3 12:46:50.226655 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 3 12:46:50.227934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:46:50.237080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:46:50.288932 systemd[1]: Reload requested from client PID 2838 ('systemctl') (unit session-7.scope)... Mar 3 12:46:50.288964 systemd[1]: Reloading... Mar 3 12:46:50.524736 zram_generator::config[2883]: No configuration found. Mar 3 12:46:51.008675 systemd[1]: Reloading finished in 719 ms. Mar 3 12:46:51.124970 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 3 12:46:51.125566 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 3 12:46:51.126102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:46:51.126173 systemd[1]: kubelet.service: Consumed 235ms CPU time, 95M memory peak. Mar 3 12:46:51.131037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:46:52.140600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:46:52.162252 (kubelet)[2947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 12:46:52.232542 kubelet[2947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 3 12:46:53.576259 kubelet[2947]: I0303 12:46:53.575988 2947 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 3 12:46:53.576259 kubelet[2947]: I0303 12:46:53.576059 2947 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 12:46:53.576259 kubelet[2947]: I0303 12:46:53.576106 2947 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 12:46:53.576259 kubelet[2947]: I0303 12:46:53.576119 2947 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 3 12:46:53.576968 kubelet[2947]: I0303 12:46:53.576607 2947 server.go:951] "Client rotation is on, will bootstrap in background" Mar 3 12:46:53.586747 kubelet[2947]: I0303 12:46:53.586494 2947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 12:46:53.587281 kubelet[2947]: E0303 12:46:53.587241 2947 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.173:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.173:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 12:46:53.593782 kubelet[2947]: I0303 12:46:53.593743 2947 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 12:46:53.600397 kubelet[2947]: I0303 12:46:53.600345 2947 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 3 12:46:53.601022 kubelet[2947]: I0303 12:46:53.600963 2947 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 12:46:53.601279 kubelet[2947]: I0303 12:46:53.601019 2947 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-173","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 12:46:53.601444 kubelet[2947]: I0303 12:46:53.601286 2947 topology_manager.go:143] "Creating topology manager with none policy" Mar 3 
12:46:53.601444 kubelet[2947]: I0303 12:46:53.601305 2947 container_manager_linux.go:308] "Creating device plugin manager" Mar 3 12:46:53.601549 kubelet[2947]: I0303 12:46:53.601462 2947 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 12:46:53.603886 kubelet[2947]: I0303 12:46:53.603824 2947 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 3 12:46:53.604342 kubelet[2947]: I0303 12:46:53.604304 2947 kubelet.go:482] "Attempting to sync node with API server" Mar 3 12:46:53.604486 kubelet[2947]: I0303 12:46:53.604368 2947 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 12:46:53.604486 kubelet[2947]: I0303 12:46:53.604405 2947 kubelet.go:394] "Adding apiserver pod source" Mar 3 12:46:53.604486 kubelet[2947]: I0303 12:46:53.604428 2947 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 12:46:53.612290 kubelet[2947]: I0303 12:46:53.612217 2947 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 12:46:53.614241 kubelet[2947]: I0303 12:46:53.614157 2947 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 12:46:53.614241 kubelet[2947]: I0303 12:46:53.614243 2947 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 12:46:53.614462 kubelet[2947]: W0303 12:46:53.614340 2947 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 3 12:46:53.619735 kubelet[2947]: I0303 12:46:53.619643 2947 server.go:1257] "Started kubelet" Mar 3 12:46:53.626514 kubelet[2947]: I0303 12:46:53.626443 2947 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 3 12:46:53.631007 kubelet[2947]: I0303 12:46:53.630903 2947 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 12:46:53.636869 kubelet[2947]: I0303 12:46:53.636769 2947 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 12:46:53.637160 kubelet[2947]: I0303 12:46:53.637120 2947 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 12:46:53.638012 kubelet[2947]: I0303 12:46:53.637973 2947 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 12:46:53.644552 kubelet[2947]: I0303 12:46:53.644498 2947 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 12:46:53.648654 kubelet[2947]: I0303 12:46:53.648598 2947 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 3 12:46:53.649087 kubelet[2947]: E0303 12:46:53.649028 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-25-173\" not found" Mar 3 12:46:53.650625 kubelet[2947]: E0303 12:46:53.650552 2947 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-173?timeout=10s\": dial tcp 172.31.25.173:6443: connect: connection refused" interval="200ms" Mar 3 12:46:53.652187 kubelet[2947]: I0303 12:46:53.652056 2947 server.go:317] "Adding debug handlers to kubelet server" Mar 3 12:46:53.655943 kubelet[2947]: E0303 12:46:53.653732 2947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.31.25.173:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.173:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-173.18995591ef3f7314 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-173,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-173,},FirstTimestamp:2026-03-03 12:46:53.619532564 +0000 UTC m=+1.450898288,LastTimestamp:2026-03-03 12:46:53.619532564 +0000 UTC m=+1.450898288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-173,}" Mar 3 12:46:53.656976 kubelet[2947]: I0303 12:46:53.656925 2947 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 12:46:53.657274 kubelet[2947]: I0303 12:46:53.657042 2947 reconciler.go:29] "Reconciler: start to sync state" Mar 3 12:46:53.658229 kubelet[2947]: I0303 12:46:53.658140 2947 factory.go:223] Registration of the systemd container factory successfully Mar 3 12:46:53.659477 kubelet[2947]: I0303 12:46:53.659289 2947 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 12:46:53.663739 kubelet[2947]: I0303 12:46:53.662377 2947 factory.go:223] Registration of the containerd container factory successfully Mar 3 12:46:53.664304 kubelet[2947]: E0303 12:46:53.664266 2947 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 12:46:53.693168 kubelet[2947]: I0303 12:46:53.693133 2947 cpu_manager.go:225] "Starting" policy="none" Mar 3 12:46:53.693430 kubelet[2947]: I0303 12:46:53.693397 2947 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 3 12:46:53.695741 kubelet[2947]: I0303 12:46:53.694463 2947 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 3 12:46:53.702418 kubelet[2947]: I0303 12:46:53.701789 2947 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 3 12:46:53.703357 kubelet[2947]: I0303 12:46:53.703301 2947 policy_none.go:50] "Start" Mar 3 12:46:53.703357 kubelet[2947]: I0303 12:46:53.703347 2947 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 12:46:53.703547 kubelet[2947]: I0303 12:46:53.703376 2947 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 12:46:53.706636 kubelet[2947]: I0303 12:46:53.706559 2947 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 3 12:46:53.706636 kubelet[2947]: I0303 12:46:53.706616 2947 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 3 12:46:53.706906 kubelet[2947]: I0303 12:46:53.706682 2947 kubelet.go:2501] "Starting kubelet main sync loop" Mar 3 12:46:53.706906 kubelet[2947]: E0303 12:46:53.706798 2947 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 12:46:53.707581 kubelet[2947]: I0303 12:46:53.707425 2947 policy_none.go:44] "Start" Mar 3 12:46:53.719798 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 3 12:46:53.742422 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 3 12:46:53.749531 kubelet[2947]: E0303 12:46:53.749450 2947 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-25-173\" not found" Mar 3 12:46:53.758003 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 3 12:46:53.762733 kubelet[2947]: E0303 12:46:53.761866 2947 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 12:46:53.762733 kubelet[2947]: I0303 12:46:53.762229 2947 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 3 12:46:53.762733 kubelet[2947]: I0303 12:46:53.762250 2947 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 12:46:53.763491 kubelet[2947]: I0303 12:46:53.763445 2947 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 3 12:46:53.766512 kubelet[2947]: E0303 12:46:53.766461 2947 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 12:46:53.766826 kubelet[2947]: E0303 12:46:53.766799 2947 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-173\" not found" Mar 3 12:46:53.833108 systemd[1]: Created slice kubepods-burstable-podb7ca0cf91e81936908ad5b9d418ec590.slice - libcontainer container kubepods-burstable-podb7ca0cf91e81936908ad5b9d418ec590.slice. 
Mar 3 12:46:53.851289 kubelet[2947]: E0303 12:46:53.850873 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:53.855643 kubelet[2947]: E0303 12:46:53.855578 2947 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-173?timeout=10s\": dial tcp 172.31.25.173:6443: connect: connection refused" interval="400ms" Mar 3 12:46:53.856981 systemd[1]: Created slice kubepods-burstable-pod9e91a8aae6f80c8fbd37cd57ac6531b8.slice - libcontainer container kubepods-burstable-pod9e91a8aae6f80c8fbd37cd57ac6531b8.slice. Mar 3 12:46:53.868646 kubelet[2947]: E0303 12:46:53.868588 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:53.869361 kubelet[2947]: I0303 12:46:53.869315 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-25-173" Mar 3 12:46:53.870851 kubelet[2947]: E0303 12:46:53.870683 2947 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.25.173:6443/api/v1/nodes\": dial tcp 172.31.25.173:6443: connect: connection refused" node="ip-172-31-25-173" Mar 3 12:46:53.876670 systemd[1]: Created slice kubepods-burstable-podeca7a6798109eb6017488935d0216232.slice - libcontainer container kubepods-burstable-podeca7a6798109eb6017488935d0216232.slice. 
Mar 3 12:46:53.883735 kubelet[2947]: E0303 12:46:53.883373 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:53.958859 kubelet[2947]: I0303 12:46:53.958805 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7ca0cf91e81936908ad5b9d418ec590-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-173\" (UID: \"b7ca0cf91e81936908ad5b9d418ec590\") " pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:46:53.959270 kubelet[2947]: I0303 12:46:53.959163 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7ca0cf91e81936908ad5b9d418ec590-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-173\" (UID: \"b7ca0cf91e81936908ad5b9d418ec590\") " pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:46:53.959498 kubelet[2947]: I0303 12:46:53.959421 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:46:53.959643 kubelet[2947]: I0303 12:46:53.959619 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:46:53.959883 kubelet[2947]: I0303 12:46:53.959827 2947 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:46:53.960050 kubelet[2947]: I0303 12:46:53.960025 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:46:53.960249 kubelet[2947]: I0303 12:46:53.960191 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eca7a6798109eb6017488935d0216232-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-173\" (UID: \"eca7a6798109eb6017488935d0216232\") " pod="kube-system/kube-scheduler-ip-172-31-25-173" Mar 3 12:46:53.960421 kubelet[2947]: I0303 12:46:53.960390 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7ca0cf91e81936908ad5b9d418ec590-ca-certs\") pod \"kube-apiserver-ip-172-31-25-173\" (UID: \"b7ca0cf91e81936908ad5b9d418ec590\") " pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:46:53.960421 kubelet[2947]: I0303 12:46:53.960505 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:46:54.073369 
kubelet[2947]: I0303 12:46:54.073307 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-25-173" Mar 3 12:46:54.074077 kubelet[2947]: E0303 12:46:54.074025 2947 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.25.173:6443/api/v1/nodes\": dial tcp 172.31.25.173:6443: connect: connection refused" node="ip-172-31-25-173" Mar 3 12:46:54.156825 containerd[2007]: time="2026-03-03T12:46:54.156598231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-173,Uid:b7ca0cf91e81936908ad5b9d418ec590,Namespace:kube-system,Attempt:0,}" Mar 3 12:46:54.174176 containerd[2007]: time="2026-03-03T12:46:54.174060079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-173,Uid:9e91a8aae6f80c8fbd37cd57ac6531b8,Namespace:kube-system,Attempt:0,}" Mar 3 12:46:54.189561 containerd[2007]: time="2026-03-03T12:46:54.189121483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-173,Uid:eca7a6798109eb6017488935d0216232,Namespace:kube-system,Attempt:0,}" Mar 3 12:46:54.257047 kubelet[2947]: E0303 12:46:54.256994 2947 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-173?timeout=10s\": dial tcp 172.31.25.173:6443: connect: connection refused" interval="800ms" Mar 3 12:46:54.477389 kubelet[2947]: I0303 12:46:54.477270 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-25-173" Mar 3 12:46:54.478103 kubelet[2947]: E0303 12:46:54.478053 2947 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.25.173:6443/api/v1/nodes\": dial tcp 172.31.25.173:6443: connect: connection refused" node="ip-172-31-25-173" Mar 3 12:46:54.676620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173795167.mount: Deactivated 
successfully. Mar 3 12:46:54.690644 containerd[2007]: time="2026-03-03T12:46:54.690559474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:54.699385 containerd[2007]: time="2026-03-03T12:46:54.699319930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 3 12:46:54.702225 containerd[2007]: time="2026-03-03T12:46:54.702140422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:54.704726 containerd[2007]: time="2026-03-03T12:46:54.704264242Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:54.708112 containerd[2007]: time="2026-03-03T12:46:54.708063694Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 12:46:54.709876 containerd[2007]: time="2026-03-03T12:46:54.709823734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 12:46:54.712969 containerd[2007]: time="2026-03-03T12:46:54.712900990Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 12:46:54.715049 containerd[2007]: time="2026-03-03T12:46:54.714971530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 
12:46:54.718052 containerd[2007]: time="2026-03-03T12:46:54.718008574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 558.198231ms" Mar 3 12:46:54.721995 containerd[2007]: time="2026-03-03T12:46:54.721914310Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 530.444319ms" Mar 3 12:46:54.749468 containerd[2007]: time="2026-03-03T12:46:54.749227738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 572.381163ms" Mar 3 12:46:54.777722 containerd[2007]: time="2026-03-03T12:46:54.777376318Z" level=info msg="connecting to shim 26fad5c813e6549b049d482b2d4a9989aebab6b83d9a8a4e11216e9f8a106f57" address="unix:///run/containerd/s/2fff81ccb6d947b8603ae440952377499dcf5d24cfae45cb392c0989c62e9fb3" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:46:54.782339 containerd[2007]: time="2026-03-03T12:46:54.782278462Z" level=info msg="connecting to shim 11f6459db7022fdb1e626f51cb14946e87444d7de4861031a2784e1c6d47ed69" address="unix:///run/containerd/s/e5d7f8fa18d511b1a24c75975413fd30b7c1201da6f709ce7fc1da99ef448efb" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:46:54.819058 containerd[2007]: time="2026-03-03T12:46:54.818988178Z" level=info msg="connecting to shim 
65fce30a070ccbcc931e22e72309187a37a0a61944c6043a228fb009d914ccd6" address="unix:///run/containerd/s/9f8fdde935027bb7ab0d5ecba3cc16b2a36b0298bb82517ef7b6a0e8a56822e7" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:46:54.851040 systemd[1]: Started cri-containerd-26fad5c813e6549b049d482b2d4a9989aebab6b83d9a8a4e11216e9f8a106f57.scope - libcontainer container 26fad5c813e6549b049d482b2d4a9989aebab6b83d9a8a4e11216e9f8a106f57. Mar 3 12:46:54.877002 systemd[1]: Started cri-containerd-11f6459db7022fdb1e626f51cb14946e87444d7de4861031a2784e1c6d47ed69.scope - libcontainer container 11f6459db7022fdb1e626f51cb14946e87444d7de4861031a2784e1c6d47ed69. Mar 3 12:46:54.920990 systemd[1]: Started cri-containerd-65fce30a070ccbcc931e22e72309187a37a0a61944c6043a228fb009d914ccd6.scope - libcontainer container 65fce30a070ccbcc931e22e72309187a37a0a61944c6043a228fb009d914ccd6. Mar 3 12:46:54.983509 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 3 12:46:55.012795 containerd[2007]: time="2026-03-03T12:46:55.006863227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-173,Uid:b7ca0cf91e81936908ad5b9d418ec590,Namespace:kube-system,Attempt:0,} returns sandbox id \"26fad5c813e6549b049d482b2d4a9989aebab6b83d9a8a4e11216e9f8a106f57\"" Mar 3 12:46:55.036865 containerd[2007]: time="2026-03-03T12:46:55.036782659Z" level=info msg="CreateContainer within sandbox \"26fad5c813e6549b049d482b2d4a9989aebab6b83d9a8a4e11216e9f8a106f57\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 12:46:55.052756 containerd[2007]: time="2026-03-03T12:46:55.052677295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-173,Uid:eca7a6798109eb6017488935d0216232,Namespace:kube-system,Attempt:0,} returns sandbox id \"11f6459db7022fdb1e626f51cb14946e87444d7de4861031a2784e1c6d47ed69\"" Mar 3 12:46:55.059194 kubelet[2947]: E0303 12:46:55.059081 2947 controller.go:201] "Failed to ensure lease exists, will retry" 
err="Get \"https://172.31.25.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-173?timeout=10s\": dial tcp 172.31.25.173:6443: connect: connection refused" interval="1.6s" Mar 3 12:46:55.065578 containerd[2007]: time="2026-03-03T12:46:55.065532067Z" level=info msg="CreateContainer within sandbox \"11f6459db7022fdb1e626f51cb14946e87444d7de4861031a2784e1c6d47ed69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 12:46:55.068808 containerd[2007]: time="2026-03-03T12:46:55.068756659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-173,Uid:9e91a8aae6f80c8fbd37cd57ac6531b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"65fce30a070ccbcc931e22e72309187a37a0a61944c6043a228fb009d914ccd6\"" Mar 3 12:46:55.081144 containerd[2007]: time="2026-03-03T12:46:55.081076772Z" level=info msg="CreateContainer within sandbox \"65fce30a070ccbcc931e22e72309187a37a0a61944c6043a228fb009d914ccd6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 12:46:55.083153 containerd[2007]: time="2026-03-03T12:46:55.083096132Z" level=info msg="Container b7b7740151596b07e9c2344da45e330ce554bfee7a50f37ec5706de4366cfec2: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:46:55.101551 containerd[2007]: time="2026-03-03T12:46:55.101411432Z" level=info msg="Container a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:46:55.108949 containerd[2007]: time="2026-03-03T12:46:55.108889592Z" level=info msg="Container eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:46:55.116258 containerd[2007]: time="2026-03-03T12:46:55.116147552Z" level=info msg="CreateContainer within sandbox \"26fad5c813e6549b049d482b2d4a9989aebab6b83d9a8a4e11216e9f8a106f57\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"b7b7740151596b07e9c2344da45e330ce554bfee7a50f37ec5706de4366cfec2\"" Mar 3 12:46:55.117621 containerd[2007]: time="2026-03-03T12:46:55.117564740Z" level=info msg="StartContainer for \"b7b7740151596b07e9c2344da45e330ce554bfee7a50f37ec5706de4366cfec2\"" Mar 3 12:46:55.122180 containerd[2007]: time="2026-03-03T12:46:55.122007500Z" level=info msg="connecting to shim b7b7740151596b07e9c2344da45e330ce554bfee7a50f37ec5706de4366cfec2" address="unix:///run/containerd/s/2fff81ccb6d947b8603ae440952377499dcf5d24cfae45cb392c0989c62e9fb3" protocol=ttrpc version=3 Mar 3 12:46:55.127427 containerd[2007]: time="2026-03-03T12:46:55.127353140Z" level=info msg="CreateContainer within sandbox \"11f6459db7022fdb1e626f51cb14946e87444d7de4861031a2784e1c6d47ed69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56\"" Mar 3 12:46:55.128819 containerd[2007]: time="2026-03-03T12:46:55.128686040Z" level=info msg="StartContainer for \"a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56\"" Mar 3 12:46:55.131844 containerd[2007]: time="2026-03-03T12:46:55.131731448Z" level=info msg="connecting to shim a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56" address="unix:///run/containerd/s/e5d7f8fa18d511b1a24c75975413fd30b7c1201da6f709ce7fc1da99ef448efb" protocol=ttrpc version=3 Mar 3 12:46:55.134714 containerd[2007]: time="2026-03-03T12:46:55.134578220Z" level=info msg="CreateContainer within sandbox \"65fce30a070ccbcc931e22e72309187a37a0a61944c6043a228fb009d914ccd6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af\"" Mar 3 12:46:55.136583 containerd[2007]: time="2026-03-03T12:46:55.136512308Z" level=info msg="StartContainer for \"eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af\"" Mar 3 12:46:55.143599 containerd[2007]: time="2026-03-03T12:46:55.143485748Z" 
level=info msg="connecting to shim eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af" address="unix:///run/containerd/s/9f8fdde935027bb7ab0d5ecba3cc16b2a36b0298bb82517ef7b6a0e8a56822e7" protocol=ttrpc version=3 Mar 3 12:46:55.170240 systemd[1]: Started cri-containerd-b7b7740151596b07e9c2344da45e330ce554bfee7a50f37ec5706de4366cfec2.scope - libcontainer container b7b7740151596b07e9c2344da45e330ce554bfee7a50f37ec5706de4366cfec2. Mar 3 12:46:55.206044 systemd[1]: Started cri-containerd-a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56.scope - libcontainer container a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56. Mar 3 12:46:55.221188 systemd[1]: Started cri-containerd-eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af.scope - libcontainer container eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af. Mar 3 12:46:55.285968 kubelet[2947]: I0303 12:46:55.284246 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-25-173" Mar 3 12:46:55.287525 kubelet[2947]: E0303 12:46:55.286942 2947 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.25.173:6443/api/v1/nodes\": dial tcp 172.31.25.173:6443: connect: connection refused" node="ip-172-31-25-173" Mar 3 12:46:55.346556 containerd[2007]: time="2026-03-03T12:46:55.346411329Z" level=info msg="StartContainer for \"b7b7740151596b07e9c2344da45e330ce554bfee7a50f37ec5706de4366cfec2\" returns successfully" Mar 3 12:46:55.367287 containerd[2007]: time="2026-03-03T12:46:55.366945657Z" level=info msg="StartContainer for \"a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56\" returns successfully" Mar 3 12:46:55.420043 containerd[2007]: time="2026-03-03T12:46:55.419957109Z" level=info msg="StartContainer for \"eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af\" returns successfully" Mar 3 12:46:55.731343 kubelet[2947]: E0303 12:46:55.730832 2947 kubelet.go:3336] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:55.735479 kubelet[2947]: E0303 12:46:55.735393 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:55.744682 kubelet[2947]: E0303 12:46:55.744625 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:56.748723 kubelet[2947]: E0303 12:46:56.748661 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:56.749242 kubelet[2947]: E0303 12:46:56.749185 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:56.889573 kubelet[2947]: I0303 12:46:56.889513 2947 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-25-173" Mar 3 12:46:57.153944 kubelet[2947]: E0303 12:46:57.153897 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:58.240206 kubelet[2947]: E0303 12:46:58.240160 2947 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:58.599747 kubelet[2947]: E0303 12:46:58.599668 2947 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-173\" not found" node="ip-172-31-25-173" Mar 3 12:46:58.613814 kubelet[2947]: I0303 12:46:58.613755 2947 apiserver.go:52] "Watching apiserver" Mar 3 12:46:58.657208 
kubelet[2947]: I0303 12:46:58.657142 2947 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 12:46:58.707412 kubelet[2947]: E0303 12:46:58.707254 2947 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-173.18995591ef3f7314 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-173,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-173,},FirstTimestamp:2026-03-03 12:46:53.619532564 +0000 UTC m=+1.450898288,LastTimestamp:2026-03-03 12:46:53.619532564 +0000 UTC m=+1.450898288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-173,}" Mar 3 12:46:58.756396 kubelet[2947]: I0303 12:46:58.756328 2947 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-25-173" Mar 3 12:46:58.850339 kubelet[2947]: I0303 12:46:58.849766 2947 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-173" Mar 3 12:46:58.871731 kubelet[2947]: E0303 12:46:58.870918 2947 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-173\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-173" Mar 3 12:46:58.871731 kubelet[2947]: I0303 12:46:58.870968 2947 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:46:58.879143 kubelet[2947]: E0303 12:46:58.879057 2947 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-173\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 
12:46:58.879143 kubelet[2947]: I0303 12:46:58.879129 2947 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:46:58.883675 kubelet[2947]: E0303 12:46:58.883586 2947 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-173\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:46:58.994031 kubelet[2947]: I0303 12:46:58.993439 2947 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:46:58.998752 kubelet[2947]: E0303 12:46:58.998682 2947 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-173\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:47:01.162477 systemd[1]: Reload requested from client PID 3238 ('systemctl') (unit session-7.scope)... Mar 3 12:47:01.162510 systemd[1]: Reloading... Mar 3 12:47:01.443754 zram_generator::config[3285]: No configuration found. Mar 3 12:47:01.934160 systemd[1]: Reloading finished in 770 ms. Mar 3 12:47:02.012275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:47:02.032545 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 12:47:02.033198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 12:47:02.033390 systemd[1]: kubelet.service: Consumed 2.271s CPU time, 119.7M memory peak. Mar 3 12:47:02.038405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 12:47:02.415463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 3 12:47:02.439419 (kubelet)[3342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 12:47:02.546747 kubelet[3342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 12:47:02.568084 kubelet[3342]: I0303 12:47:02.567968 3342 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 3 12:47:02.568084 kubelet[3342]: I0303 12:47:02.568039 3342 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 12:47:02.568084 kubelet[3342]: I0303 12:47:02.568080 3342 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 12:47:02.568084 kubelet[3342]: I0303 12:47:02.568095 3342 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 3 12:47:02.568753 kubelet[3342]: I0303 12:47:02.568648 3342 server.go:951] "Client rotation is on, will bootstrap in background" Mar 3 12:47:02.571382 kubelet[3342]: I0303 12:47:02.571314 3342 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 3 12:47:02.576441 kubelet[3342]: I0303 12:47:02.576369 3342 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 12:47:02.591735 kubelet[3342]: I0303 12:47:02.590819 3342 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 12:47:02.596619 kubelet[3342]: I0303 12:47:02.596574 3342 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 3 12:47:02.597174 kubelet[3342]: I0303 12:47:02.597127 3342 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 12:47:02.597575 kubelet[3342]: I0303 12:47:02.597282 3342 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-173","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 12:47:02.597817 kubelet[3342]: I0303 12:47:02.597795 3342 topology_manager.go:143] "Creating topology manager with none policy" Mar 3 
12:47:02.597914 kubelet[3342]: I0303 12:47:02.597897 3342 container_manager_linux.go:308] "Creating device plugin manager" Mar 3 12:47:02.598122 kubelet[3342]: I0303 12:47:02.598102 3342 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 12:47:02.598585 kubelet[3342]: I0303 12:47:02.598563 3342 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 3 12:47:02.598994 kubelet[3342]: I0303 12:47:02.598973 3342 kubelet.go:482] "Attempting to sync node with API server" Mar 3 12:47:02.599102 kubelet[3342]: I0303 12:47:02.599083 3342 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 12:47:02.599224 kubelet[3342]: I0303 12:47:02.599207 3342 kubelet.go:394] "Adding apiserver pod source" Mar 3 12:47:02.599318 kubelet[3342]: I0303 12:47:02.599301 3342 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 12:47:02.604266 kubelet[3342]: I0303 12:47:02.603770 3342 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 12:47:02.606601 kubelet[3342]: I0303 12:47:02.605869 3342 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 12:47:02.610299 kubelet[3342]: I0303 12:47:02.607395 3342 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 12:47:02.622736 kubelet[3342]: I0303 12:47:02.618277 3342 server.go:1257] "Started kubelet" Mar 3 12:47:02.632292 kubelet[3342]: I0303 12:47:02.632257 3342 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 3 12:47:02.635960 kubelet[3342]: I0303 12:47:02.635878 3342 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 12:47:02.644526 kubelet[3342]: I0303 12:47:02.643658 3342 server.go:317] "Adding debug handlers to kubelet 
server" Mar 3 12:47:02.675904 kubelet[3342]: I0303 12:47:02.644794 3342 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 12:47:02.695551 kubelet[3342]: I0303 12:47:02.695512 3342 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 12:47:02.697026 kubelet[3342]: I0303 12:47:02.696993 3342 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 12:47:02.697193 kubelet[3342]: I0303 12:47:02.665229 3342 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 12:47:02.697281 kubelet[3342]: E0303 12:47:02.665263 3342 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-25-173\" not found" Mar 3 12:47:02.697371 kubelet[3342]: I0303 12:47:02.687150 3342 factory.go:223] Registration of the systemd container factory successfully Mar 3 12:47:02.697574 kubelet[3342]: I0303 12:47:02.697545 3342 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 12:47:02.713806 kubelet[3342]: I0303 12:47:02.656763 3342 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 12:47:02.714105 kubelet[3342]: I0303 12:47:02.665209 3342 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 3 12:47:02.715251 kubelet[3342]: I0303 12:47:02.715220 3342 reconciler.go:29] "Reconciler: start to sync state" Mar 3 12:47:02.726953 kubelet[3342]: I0303 12:47:02.726913 3342 factory.go:223] Registration of the containerd container factory successfully Mar 3 12:47:02.754286 kubelet[3342]: I0303 12:47:02.754224 3342 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 3 12:47:02.756547 kubelet[3342]: I0303 12:47:02.756505 3342 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 3 12:47:02.756759 kubelet[3342]: I0303 12:47:02.756733 3342 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 3 12:47:02.756905 kubelet[3342]: I0303 12:47:02.756887 3342 kubelet.go:2501] "Starting kubelet main sync loop" Mar 3 12:47:02.757081 kubelet[3342]: E0303 12:47:02.757049 3342 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 12:47:02.856665 kubelet[3342]: I0303 12:47:02.856173 3342 cpu_manager.go:225] "Starting" policy="none" Mar 3 12:47:02.856665 kubelet[3342]: I0303 12:47:02.856250 3342 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 3 12:47:02.858556 kubelet[3342]: I0303 12:47:02.856925 3342 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 3 12:47:02.858556 kubelet[3342]: I0303 12:47:02.857165 3342 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 3 12:47:02.858556 kubelet[3342]: I0303 12:47:02.857188 3342 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 3 12:47:02.858556 kubelet[3342]: E0303 12:47:02.857209 3342 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 3 12:47:02.858556 kubelet[3342]: I0303 12:47:02.857220 3342 policy_none.go:50] "Start" Mar 3 12:47:02.858556 kubelet[3342]: I0303 12:47:02.857272 3342 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 12:47:02.858556 kubelet[3342]: I0303 12:47:02.857299 3342 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 12:47:02.858556 kubelet[3342]: I0303 12:47:02.857885 
3342 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 3 12:47:02.858556 kubelet[3342]: I0303 12:47:02.857918 3342 policy_none.go:44] "Start" Mar 3 12:47:02.870019 kubelet[3342]: E0303 12:47:02.869832 3342 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 12:47:02.870932 kubelet[3342]: I0303 12:47:02.870776 3342 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 3 12:47:02.872743 kubelet[3342]: I0303 12:47:02.871007 3342 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 12:47:02.876851 kubelet[3342]: I0303 12:47:02.876822 3342 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 3 12:47:02.877909 kubelet[3342]: E0303 12:47:02.877461 3342 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 12:47:02.994866 kubelet[3342]: I0303 12:47:02.994682 3342 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-25-173" Mar 3 12:47:03.013049 kubelet[3342]: I0303 12:47:03.013010 3342 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-25-173" Mar 3 12:47:03.013656 kubelet[3342]: I0303 12:47:03.013436 3342 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-25-173" Mar 3 12:47:03.058479 kubelet[3342]: I0303 12:47:03.058330 3342 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-173" Mar 3 12:47:03.061749 kubelet[3342]: I0303 12:47:03.059244 3342 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:47:03.061749 kubelet[3342]: I0303 12:47:03.059328 3342 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:47:03.118514 
kubelet[3342]: I0303 12:47:03.118448 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7ca0cf91e81936908ad5b9d418ec590-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-173\" (UID: \"b7ca0cf91e81936908ad5b9d418ec590\") " pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:47:03.118514 kubelet[3342]: I0303 12:47:03.118514 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:47:03.118744 kubelet[3342]: I0303 12:47:03.118554 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:47:03.118744 kubelet[3342]: I0303 12:47:03.118591 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:47:03.118744 kubelet[3342]: I0303 12:47:03.118631 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eca7a6798109eb6017488935d0216232-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-173\" (UID: \"eca7a6798109eb6017488935d0216232\") " 
pod="kube-system/kube-scheduler-ip-172-31-25-173" Mar 3 12:47:03.118744 kubelet[3342]: I0303 12:47:03.118664 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7ca0cf91e81936908ad5b9d418ec590-ca-certs\") pod \"kube-apiserver-ip-172-31-25-173\" (UID: \"b7ca0cf91e81936908ad5b9d418ec590\") " pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:47:03.118951 kubelet[3342]: I0303 12:47:03.118754 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7ca0cf91e81936908ad5b9d418ec590-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-173\" (UID: \"b7ca0cf91e81936908ad5b9d418ec590\") " pod="kube-system/kube-apiserver-ip-172-31-25-173" Mar 3 12:47:03.118951 kubelet[3342]: I0303 12:47:03.118796 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:47:03.118951 kubelet[3342]: I0303 12:47:03.118844 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e91a8aae6f80c8fbd37cd57ac6531b8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-173\" (UID: \"9e91a8aae6f80c8fbd37cd57ac6531b8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-173" Mar 3 12:47:03.601731 kubelet[3342]: I0303 12:47:03.601668 3342 apiserver.go:52] "Watching apiserver" Mar 3 12:47:03.698371 kubelet[3342]: I0303 12:47:03.698294 3342 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 12:47:03.884988 kubelet[3342]: 
I0303 12:47:03.884344 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-173" podStartSLOduration=0.884325487 podStartE2EDuration="884.325487ms" podCreationTimestamp="2026-03-03 12:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:03.860746051 +0000 UTC m=+1.412451404" watchObservedRunningTime="2026-03-03 12:47:03.884325487 +0000 UTC m=+1.436030816" Mar 3 12:47:03.904819 kubelet[3342]: I0303 12:47:03.903986 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-173" podStartSLOduration=0.903967915 podStartE2EDuration="903.967915ms" podCreationTimestamp="2026-03-03 12:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:03.886113139 +0000 UTC m=+1.437818480" watchObservedRunningTime="2026-03-03 12:47:03.903967915 +0000 UTC m=+1.455673256" Mar 3 12:47:03.967815 kubelet[3342]: I0303 12:47:03.967658 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-173" podStartSLOduration=0.967639448 podStartE2EDuration="967.639448ms" podCreationTimestamp="2026-03-03 12:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:03.909447331 +0000 UTC m=+1.461152672" watchObservedRunningTime="2026-03-03 12:47:03.967639448 +0000 UTC m=+1.519344777" Mar 3 12:47:06.079244 kubelet[3342]: I0303 12:47:06.079180 3342 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 3 12:47:06.082051 containerd[2007]: time="2026-03-03T12:47:06.081890898Z" level=info msg="No cni config template is specified, wait for other system components to 
drop the config." Mar 3 12:47:06.082821 kubelet[3342]: I0303 12:47:06.082771 3342 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 3 12:47:06.975511 systemd[1]: Created slice kubepods-besteffort-pode3e88444_fb18_4f8d_ac3e_cdb52b5ce0c5.slice - libcontainer container kubepods-besteffort-pode3e88444_fb18_4f8d_ac3e_cdb52b5ce0c5.slice. Mar 3 12:47:07.045883 kubelet[3342]: I0303 12:47:07.045762 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5-xtables-lock\") pod \"kube-proxy-bgqmx\" (UID: \"e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5\") " pod="kube-system/kube-proxy-bgqmx" Mar 3 12:47:07.045883 kubelet[3342]: I0303 12:47:07.045830 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9kxx\" (UniqueName: \"kubernetes.io/projected/e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5-kube-api-access-g9kxx\") pod \"kube-proxy-bgqmx\" (UID: \"e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5\") " pod="kube-system/kube-proxy-bgqmx" Mar 3 12:47:07.046121 kubelet[3342]: I0303 12:47:07.045935 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5-kube-proxy\") pod \"kube-proxy-bgqmx\" (UID: \"e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5\") " pod="kube-system/kube-proxy-bgqmx" Mar 3 12:47:07.046121 kubelet[3342]: I0303 12:47:07.045973 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5-lib-modules\") pod \"kube-proxy-bgqmx\" (UID: \"e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5\") " pod="kube-system/kube-proxy-bgqmx" Mar 3 12:47:07.294139 containerd[2007]: time="2026-03-03T12:47:07.294025940Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bgqmx,Uid:e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:07.359351 containerd[2007]: time="2026-03-03T12:47:07.359056304Z" level=info msg="connecting to shim 3e56986a2a93650b5fcd941fae7dba8478df33c285621acecaf1c1e5e9da54e7" address="unix:///run/containerd/s/5095238394b8af28bcd718c06a5f809e174f265c216744a016b01b67ae2fb570" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:07.411235 systemd[1]: Created slice kubepods-besteffort-pod312e25c8_3ead_41ca_bf81_ddf2c973c8ba.slice - libcontainer container kubepods-besteffort-pod312e25c8_3ead_41ca_bf81_ddf2c973c8ba.slice. Mar 3 12:47:07.435009 systemd[1]: Started cri-containerd-3e56986a2a93650b5fcd941fae7dba8478df33c285621acecaf1c1e5e9da54e7.scope - libcontainer container 3e56986a2a93650b5fcd941fae7dba8478df33c285621acecaf1c1e5e9da54e7. Mar 3 12:47:07.447601 kubelet[3342]: I0303 12:47:07.447503 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/312e25c8-3ead-41ca-bf81-ddf2c973c8ba-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-8ng9r\" (UID: \"312e25c8-3ead-41ca-bf81-ddf2c973c8ba\") " pod="tigera-operator/tigera-operator-6cf4cccc57-8ng9r" Mar 3 12:47:07.447601 kubelet[3342]: I0303 12:47:07.447581 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshlf\" (UniqueName: \"kubernetes.io/projected/312e25c8-3ead-41ca-bf81-ddf2c973c8ba-kube-api-access-wshlf\") pod \"tigera-operator-6cf4cccc57-8ng9r\" (UID: \"312e25c8-3ead-41ca-bf81-ddf2c973c8ba\") " pod="tigera-operator/tigera-operator-6cf4cccc57-8ng9r" Mar 3 12:47:07.483959 containerd[2007]: time="2026-03-03T12:47:07.483891909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bgqmx,Uid:e3e88444-fb18-4f8d-ac3e-cdb52b5ce0c5,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"3e56986a2a93650b5fcd941fae7dba8478df33c285621acecaf1c1e5e9da54e7\"" Mar 3 12:47:07.496310 containerd[2007]: time="2026-03-03T12:47:07.496252953Z" level=info msg="CreateContainer within sandbox \"3e56986a2a93650b5fcd941fae7dba8478df33c285621acecaf1c1e5e9da54e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 3 12:47:07.517096 containerd[2007]: time="2026-03-03T12:47:07.516781641Z" level=info msg="Container ae4e8b32d90bde34afb182717f5e32e5f11dcf7aba1b69a83e6674d7bcf35637: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:07.536331 containerd[2007]: time="2026-03-03T12:47:07.536281173Z" level=info msg="CreateContainer within sandbox \"3e56986a2a93650b5fcd941fae7dba8478df33c285621acecaf1c1e5e9da54e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae4e8b32d90bde34afb182717f5e32e5f11dcf7aba1b69a83e6674d7bcf35637\"" Mar 3 12:47:07.537477 containerd[2007]: time="2026-03-03T12:47:07.537362721Z" level=info msg="StartContainer for \"ae4e8b32d90bde34afb182717f5e32e5f11dcf7aba1b69a83e6674d7bcf35637\"" Mar 3 12:47:07.544782 containerd[2007]: time="2026-03-03T12:47:07.544456089Z" level=info msg="connecting to shim ae4e8b32d90bde34afb182717f5e32e5f11dcf7aba1b69a83e6674d7bcf35637" address="unix:///run/containerd/s/5095238394b8af28bcd718c06a5f809e174f265c216744a016b01b67ae2fb570" protocol=ttrpc version=3 Mar 3 12:47:07.592013 systemd[1]: Started cri-containerd-ae4e8b32d90bde34afb182717f5e32e5f11dcf7aba1b69a83e6674d7bcf35637.scope - libcontainer container ae4e8b32d90bde34afb182717f5e32e5f11dcf7aba1b69a83e6674d7bcf35637. 
Mar 3 12:47:07.715686 containerd[2007]: time="2026-03-03T12:47:07.715640386Z" level=info msg="StartContainer for \"ae4e8b32d90bde34afb182717f5e32e5f11dcf7aba1b69a83e6674d7bcf35637\" returns successfully" Mar 3 12:47:07.724250 containerd[2007]: time="2026-03-03T12:47:07.724202110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-8ng9r,Uid:312e25c8-3ead-41ca-bf81-ddf2c973c8ba,Namespace:tigera-operator,Attempt:0,}" Mar 3 12:47:07.768631 containerd[2007]: time="2026-03-03T12:47:07.768553979Z" level=info msg="connecting to shim fd536d6ba4ed69b285dc7075cf022068fe1dc6e25c3d3bc3f55caa955b71d64c" address="unix:///run/containerd/s/3ba045487275b4292dfcd2492386548bf48bd68e58f36a5ce8967459a86f86ce" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:07.828139 systemd[1]: Started cri-containerd-fd536d6ba4ed69b285dc7075cf022068fe1dc6e25c3d3bc3f55caa955b71d64c.scope - libcontainer container fd536d6ba4ed69b285dc7075cf022068fe1dc6e25c3d3bc3f55caa955b71d64c. Mar 3 12:47:07.908553 update_engine[1979]: I20260303 12:47:07.907760 1979 update_attempter.cc:509] Updating boot flags... 
Mar 3 12:47:07.980306 containerd[2007]: time="2026-03-03T12:47:07.977905368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-8ng9r,Uid:312e25c8-3ead-41ca-bf81-ddf2c973c8ba,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fd536d6ba4ed69b285dc7075cf022068fe1dc6e25c3d3bc3f55caa955b71d64c\"" Mar 3 12:47:07.988991 containerd[2007]: time="2026-03-03T12:47:07.988829052Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 3 12:47:08.605662 kubelet[3342]: I0303 12:47:08.605577 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-bgqmx" podStartSLOduration=2.605560523 podStartE2EDuration="2.605560523s" podCreationTimestamp="2026-03-03 12:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:47:07.869857391 +0000 UTC m=+5.421562732" watchObservedRunningTime="2026-03-03 12:47:08.605560523 +0000 UTC m=+6.157265852" Mar 3 12:47:09.305290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145663015.mount: Deactivated successfully. 
Mar 3 12:47:10.544758 containerd[2007]: time="2026-03-03T12:47:10.544519968Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:10.546421 containerd[2007]: time="2026-03-03T12:47:10.546347040Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 3 12:47:10.548875 containerd[2007]: time="2026-03-03T12:47:10.548804928Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:10.553757 containerd[2007]: time="2026-03-03T12:47:10.553433220Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:10.555589 containerd[2007]: time="2026-03-03T12:47:10.554757924Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.565840756s" Mar 3 12:47:10.555589 containerd[2007]: time="2026-03-03T12:47:10.554814816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 3 12:47:10.564449 containerd[2007]: time="2026-03-03T12:47:10.564385872Z" level=info msg="CreateContainer within sandbox \"fd536d6ba4ed69b285dc7075cf022068fe1dc6e25c3d3bc3f55caa955b71d64c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 3 12:47:10.586053 containerd[2007]: time="2026-03-03T12:47:10.585983041Z" level=info msg="Container 
a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:10.599830 containerd[2007]: time="2026-03-03T12:47:10.599683141Z" level=info msg="CreateContainer within sandbox \"fd536d6ba4ed69b285dc7075cf022068fe1dc6e25c3d3bc3f55caa955b71d64c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04\"" Mar 3 12:47:10.602418 containerd[2007]: time="2026-03-03T12:47:10.601544773Z" level=info msg="StartContainer for \"a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04\"" Mar 3 12:47:10.603990 containerd[2007]: time="2026-03-03T12:47:10.603915793Z" level=info msg="connecting to shim a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04" address="unix:///run/containerd/s/3ba045487275b4292dfcd2492386548bf48bd68e58f36a5ce8967459a86f86ce" protocol=ttrpc version=3 Mar 3 12:47:10.649000 systemd[1]: Started cri-containerd-a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04.scope - libcontainer container a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04. 
Mar 3 12:47:10.708409 containerd[2007]: time="2026-03-03T12:47:10.708330445Z" level=info msg="StartContainer for \"a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04\" returns successfully" Mar 3 12:47:12.843016 kubelet[3342]: I0303 12:47:12.842907 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-8ng9r" podStartSLOduration=3.272441328 podStartE2EDuration="5.842889724s" podCreationTimestamp="2026-03-03 12:47:07 +0000 UTC" firstStartedPulling="2026-03-03 12:47:07.986204568 +0000 UTC m=+5.537909897" lastFinishedPulling="2026-03-03 12:47:10.556652964 +0000 UTC m=+8.108358293" observedRunningTime="2026-03-03 12:47:10.904548938 +0000 UTC m=+8.456254327" watchObservedRunningTime="2026-03-03 12:47:12.842889724 +0000 UTC m=+10.394595041" Mar 3 12:47:17.278897 sudo[2366]: pam_unix(sudo:session): session closed for user root Mar 3 12:47:17.357515 sshd[2365]: Connection closed by 20.161.92.111 port 54942 Mar 3 12:47:17.359105 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Mar 3 12:47:17.368969 systemd[1]: sshd@6-172.31.25.173:22-20.161.92.111:54942.service: Deactivated successfully. Mar 3 12:47:17.376976 systemd[1]: session-7.scope: Deactivated successfully. Mar 3 12:47:17.377665 systemd[1]: session-7.scope: Consumed 7.968s CPU time, 217.9M memory peak. Mar 3 12:47:17.385479 systemd-logind[1978]: Session 7 logged out. Waiting for processes to exit. Mar 3 12:47:17.390795 systemd-logind[1978]: Removed session 7. Mar 3 12:47:29.192382 systemd[1]: Created slice kubepods-besteffort-pod933bfbed_4063_484f_8133_90b0354b6989.slice - libcontainer container kubepods-besteffort-pod933bfbed_4063_484f_8133_90b0354b6989.slice. 
Mar 3 12:47:29.294669 kubelet[3342]: I0303 12:47:29.294589 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7fvg\" (UniqueName: \"kubernetes.io/projected/933bfbed-4063-484f-8133-90b0354b6989-kube-api-access-q7fvg\") pod \"calico-typha-575c44c45b-j4cp5\" (UID: \"933bfbed-4063-484f-8133-90b0354b6989\") " pod="calico-system/calico-typha-575c44c45b-j4cp5" Mar 3 12:47:29.294669 kubelet[3342]: I0303 12:47:29.294670 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933bfbed-4063-484f-8133-90b0354b6989-tigera-ca-bundle\") pod \"calico-typha-575c44c45b-j4cp5\" (UID: \"933bfbed-4063-484f-8133-90b0354b6989\") " pod="calico-system/calico-typha-575c44c45b-j4cp5" Mar 3 12:47:29.295309 kubelet[3342]: I0303 12:47:29.294743 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/933bfbed-4063-484f-8133-90b0354b6989-typha-certs\") pod \"calico-typha-575c44c45b-j4cp5\" (UID: \"933bfbed-4063-484f-8133-90b0354b6989\") " pod="calico-system/calico-typha-575c44c45b-j4cp5" Mar 3 12:47:29.382948 systemd[1]: Created slice kubepods-besteffort-podd6ed3be2_8799_4eac_bca8_cc0ba786a00b.slice - libcontainer container kubepods-besteffort-podd6ed3be2_8799_4eac_bca8_cc0ba786a00b.slice. 
Mar 3 12:47:29.497204 kubelet[3342]: I0303 12:47:29.496873 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-cni-log-dir\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.497204 kubelet[3342]: I0303 12:47:29.496946 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-var-run-calico\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.497204 kubelet[3342]: I0303 12:47:29.496989 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-bpffs\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.497204 kubelet[3342]: I0303 12:47:29.497026 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-policysync\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.497204 kubelet[3342]: I0303 12:47:29.497065 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-cni-net-dir\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.497662 kubelet[3342]: I0303 12:47:29.497100 3342 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-flexvol-driver-host\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.497662 kubelet[3342]: I0303 12:47:29.497135 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-xtables-lock\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.499419 kubelet[3342]: I0303 12:47:29.497179 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-tigera-ca-bundle\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.499571 kubelet[3342]: I0303 12:47:29.499445 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-node-certs\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.499571 kubelet[3342]: I0303 12:47:29.499488 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-nodeproc\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.499571 kubelet[3342]: I0303 12:47:29.499521 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" 
(UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-sys-fs\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.499571 kubelet[3342]: I0303 12:47:29.499564 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-var-lib-calico\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.500086 kubelet[3342]: I0303 12:47:29.499607 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-lib-modules\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.500086 kubelet[3342]: I0303 12:47:29.499645 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-cni-bin-dir\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.500086 kubelet[3342]: I0303 12:47:29.499679 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pbqw\" (UniqueName: \"kubernetes.io/projected/d6ed3be2-8799-4eac-bca8-cc0ba786a00b-kube-api-access-7pbqw\") pod \"calico-node-2bcqj\" (UID: \"d6ed3be2-8799-4eac-bca8-cc0ba786a00b\") " pod="calico-system/calico-node-2bcqj" Mar 3 12:47:29.508601 containerd[2007]: time="2026-03-03T12:47:29.508550971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575c44c45b-j4cp5,Uid:933bfbed-4063-484f-8133-90b0354b6989,Namespace:calico-system,Attempt:0,}" Mar 3 
12:47:29.521028 kubelet[3342]: E0303 12:47:29.519799 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:29.577722 containerd[2007]: time="2026-03-03T12:47:29.577481935Z" level=info msg="connecting to shim 6c412aa5f56ca3f325e18d5338b9857722d8ce2567915f3acbeede6c2555e547" address="unix:///run/containerd/s/1a2987538bec2b5bed306c34b384fde18385b97105d0c42a6c5f4939595c9e3c" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:29.601161 kubelet[3342]: I0303 12:47:29.601108 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7cdb4c7c-a3da-4294-8141-e0392c7cd04f-varrun\") pod \"csi-node-driver-89x9d\" (UID: \"7cdb4c7c-a3da-4294-8141-e0392c7cd04f\") " pod="calico-system/csi-node-driver-89x9d" Mar 3 12:47:29.601629 kubelet[3342]: I0303 12:47:29.601391 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89jfw\" (UniqueName: \"kubernetes.io/projected/7cdb4c7c-a3da-4294-8141-e0392c7cd04f-kube-api-access-89jfw\") pod \"csi-node-driver-89x9d\" (UID: \"7cdb4c7c-a3da-4294-8141-e0392c7cd04f\") " pod="calico-system/csi-node-driver-89x9d" Mar 3 12:47:29.604852 kubelet[3342]: I0303 12:47:29.604111 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cdb4c7c-a3da-4294-8141-e0392c7cd04f-socket-dir\") pod \"csi-node-driver-89x9d\" (UID: \"7cdb4c7c-a3da-4294-8141-e0392c7cd04f\") " pod="calico-system/csi-node-driver-89x9d" Mar 3 12:47:29.607770 kubelet[3342]: I0303 12:47:29.605189 3342 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cdb4c7c-a3da-4294-8141-e0392c7cd04f-kubelet-dir\") pod \"csi-node-driver-89x9d\" (UID: \"7cdb4c7c-a3da-4294-8141-e0392c7cd04f\") " pod="calico-system/csi-node-driver-89x9d" Mar 3 12:47:29.618026 kubelet[3342]: I0303 12:47:29.617969 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cdb4c7c-a3da-4294-8141-e0392c7cd04f-registration-dir\") pod \"csi-node-driver-89x9d\" (UID: \"7cdb4c7c-a3da-4294-8141-e0392c7cd04f\") " pod="calico-system/csi-node-driver-89x9d" Mar 3 12:47:29.636935 kubelet[3342]: E0303 12:47:29.635869 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.636935 kubelet[3342]: W0303 12:47:29.635913 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.636935 kubelet[3342]: E0303 12:47:29.635958 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:29.651736 kubelet[3342]: E0303 12:47:29.650295 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.651736 kubelet[3342]: W0303 12:47:29.650333 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.651736 kubelet[3342]: E0303 12:47:29.650369 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:29.679249 systemd[1]: Started cri-containerd-6c412aa5f56ca3f325e18d5338b9857722d8ce2567915f3acbeede6c2555e547.scope - libcontainer container 6c412aa5f56ca3f325e18d5338b9857722d8ce2567915f3acbeede6c2555e547. Mar 3 12:47:29.694018 kubelet[3342]: E0303 12:47:29.693889 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.694018 kubelet[3342]: W0303 12:47:29.693933 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.694018 kubelet[3342]: E0303 12:47:29.693970 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:29.730272 kubelet[3342]: E0303 12:47:29.730092 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.730272 kubelet[3342]: W0303 12:47:29.730112 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.730272 kubelet[3342]: E0303 12:47:29.730134 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:29.730546 kubelet[3342]: E0303 12:47:29.730442 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.730546 kubelet[3342]: W0303 12:47:29.730458 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.730546 kubelet[3342]: E0303 12:47:29.730477 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:29.730907 kubelet[3342]: E0303 12:47:29.730786 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.730907 kubelet[3342]: W0303 12:47:29.730803 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.730907 kubelet[3342]: E0303 12:47:29.730841 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:29.731978 kubelet[3342]: E0303 12:47:29.731929 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.731978 kubelet[3342]: W0303 12:47:29.731971 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.731978 kubelet[3342]: E0303 12:47:29.732006 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:29.733513 kubelet[3342]: E0303 12:47:29.733466 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.733513 kubelet[3342]: W0303 12:47:29.733503 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.734621 kubelet[3342]: E0303 12:47:29.733540 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:29.735350 kubelet[3342]: E0303 12:47:29.735303 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.735350 kubelet[3342]: W0303 12:47:29.735340 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.735601 kubelet[3342]: E0303 12:47:29.735375 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:29.736824 kubelet[3342]: E0303 12:47:29.736738 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.736824 kubelet[3342]: W0303 12:47:29.736813 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.737005 kubelet[3342]: E0303 12:47:29.736849 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:29.737301 kubelet[3342]: E0303 12:47:29.737267 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.737301 kubelet[3342]: W0303 12:47:29.737296 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.737529 kubelet[3342]: E0303 12:47:29.737322 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:29.737642 kubelet[3342]: E0303 12:47:29.737630 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.737855 kubelet[3342]: W0303 12:47:29.737647 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.737855 kubelet[3342]: E0303 12:47:29.737666 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:29.738057 kubelet[3342]: E0303 12:47:29.738018 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.738057 kubelet[3342]: W0303 12:47:29.738046 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.738414 kubelet[3342]: E0303 12:47:29.738070 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:29.738414 kubelet[3342]: E0303 12:47:29.738406 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.738527 kubelet[3342]: W0303 12:47:29.738424 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.738527 kubelet[3342]: E0303 12:47:29.738445 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:29.769754 kubelet[3342]: E0303 12:47:29.769407 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:29.769754 kubelet[3342]: W0303 12:47:29.769574 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:29.770048 kubelet[3342]: E0303 12:47:29.769613 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:29.815065 containerd[2007]: time="2026-03-03T12:47:29.814898000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575c44c45b-j4cp5,Uid:933bfbed-4063-484f-8133-90b0354b6989,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c412aa5f56ca3f325e18d5338b9857722d8ce2567915f3acbeede6c2555e547\"" Mar 3 12:47:29.819187 containerd[2007]: time="2026-03-03T12:47:29.819063236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 3 12:47:29.994205 containerd[2007]: time="2026-03-03T12:47:29.994157361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bcqj,Uid:d6ed3be2-8799-4eac-bca8-cc0ba786a00b,Namespace:calico-system,Attempt:0,}" Mar 3 12:47:30.032370 containerd[2007]: time="2026-03-03T12:47:30.031957997Z" level=info msg="connecting to shim 7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9" address="unix:///run/containerd/s/9e9f8b54994b688185b30af38e3c2f30294645c1401c54f251ae09ce5b265f57" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:30.073027 systemd[1]: Started cri-containerd-7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9.scope - libcontainer container 7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9. 
Mar 3 12:47:30.126393 containerd[2007]: time="2026-03-03T12:47:30.126302922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bcqj,Uid:d6ed3be2-8799-4eac-bca8-cc0ba786a00b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\"" Mar 3 12:47:30.763164 kubelet[3342]: E0303 12:47:30.762832 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:31.493111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3563990743.mount: Deactivated successfully. Mar 3 12:47:32.760960 kubelet[3342]: E0303 12:47:32.760869 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:32.879435 containerd[2007]: time="2026-03-03T12:47:32.879372251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:32.880833 containerd[2007]: time="2026-03-03T12:47:32.880768391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 3 12:47:32.882447 containerd[2007]: time="2026-03-03T12:47:32.882363563Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:32.889667 containerd[2007]: time="2026-03-03T12:47:32.889608467Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:32.892588 containerd[2007]: time="2026-03-03T12:47:32.892215899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 3.073048599s" Mar 3 12:47:32.892588 containerd[2007]: time="2026-03-03T12:47:32.892268399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 3 12:47:32.896729 containerd[2007]: time="2026-03-03T12:47:32.895651259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 3 12:47:32.928720 containerd[2007]: time="2026-03-03T12:47:32.927382319Z" level=info msg="CreateContainer within sandbox \"6c412aa5f56ca3f325e18d5338b9857722d8ce2567915f3acbeede6c2555e547\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 3 12:47:32.942735 containerd[2007]: time="2026-03-03T12:47:32.940977312Z" level=info msg="Container 8cbe8169d34043daafa20c8b6fe46fb9dc9caff2086600433000fe6c5f065078: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:32.951803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909803666.mount: Deactivated successfully. 
Mar 3 12:47:32.962061 containerd[2007]: time="2026-03-03T12:47:32.961979076Z" level=info msg="CreateContainer within sandbox \"6c412aa5f56ca3f325e18d5338b9857722d8ce2567915f3acbeede6c2555e547\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8cbe8169d34043daafa20c8b6fe46fb9dc9caff2086600433000fe6c5f065078\"" Mar 3 12:47:32.964298 containerd[2007]: time="2026-03-03T12:47:32.963314316Z" level=info msg="StartContainer for \"8cbe8169d34043daafa20c8b6fe46fb9dc9caff2086600433000fe6c5f065078\"" Mar 3 12:47:32.968823 containerd[2007]: time="2026-03-03T12:47:32.968744652Z" level=info msg="connecting to shim 8cbe8169d34043daafa20c8b6fe46fb9dc9caff2086600433000fe6c5f065078" address="unix:///run/containerd/s/1a2987538bec2b5bed306c34b384fde18385b97105d0c42a6c5f4939595c9e3c" protocol=ttrpc version=3 Mar 3 12:47:33.013004 systemd[1]: Started cri-containerd-8cbe8169d34043daafa20c8b6fe46fb9dc9caff2086600433000fe6c5f065078.scope - libcontainer container 8cbe8169d34043daafa20c8b6fe46fb9dc9caff2086600433000fe6c5f065078. 
Mar 3 12:47:33.130113 containerd[2007]: time="2026-03-03T12:47:33.129893780Z" level=info msg="StartContainer for \"8cbe8169d34043daafa20c8b6fe46fb9dc9caff2086600433000fe6c5f065078\" returns successfully" Mar 3 12:47:34.023173 kubelet[3342]: I0303 12:47:34.022826 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-575c44c45b-j4cp5" podStartSLOduration=1.946390138 podStartE2EDuration="5.022668225s" podCreationTimestamp="2026-03-03 12:47:29 +0000 UTC" firstStartedPulling="2026-03-03 12:47:29.817650716 +0000 UTC m=+27.369356045" lastFinishedPulling="2026-03-03 12:47:32.893928803 +0000 UTC m=+30.445634132" observedRunningTime="2026-03-03 12:47:34.019382409 +0000 UTC m=+31.571087750" watchObservedRunningTime="2026-03-03 12:47:34.022668225 +0000 UTC m=+31.574373578" Mar 3 12:47:34.089732 kubelet[3342]: E0303 12:47:34.089596 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.089928 kubelet[3342]: W0303 12:47:34.089653 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.090104 kubelet[3342]: E0303 12:47:34.090009 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.090525 kubelet[3342]: E0303 12:47:34.090492 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.090793 kubelet[3342]: W0303 12:47:34.090603 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.090793 kubelet[3342]: E0303 12:47:34.090630 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.091349 kubelet[3342]: E0303 12:47:34.091227 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.091349 kubelet[3342]: W0303 12:47:34.091249 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.091349 kubelet[3342]: E0303 12:47:34.091272 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.091919 kubelet[3342]: E0303 12:47:34.091898 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.092124 kubelet[3342]: W0303 12:47:34.092009 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.092124 kubelet[3342]: E0303 12:47:34.092039 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.093662 kubelet[3342]: E0303 12:47:34.093419 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.093662 kubelet[3342]: W0303 12:47:34.093452 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.093662 kubelet[3342]: E0303 12:47:34.093482 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.094003 kubelet[3342]: E0303 12:47:34.093982 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.094795 kubelet[3342]: W0303 12:47:34.094552 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.094795 kubelet[3342]: E0303 12:47:34.094593 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.095412 kubelet[3342]: E0303 12:47:34.095196 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.095412 kubelet[3342]: W0303 12:47:34.095222 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.095412 kubelet[3342]: E0303 12:47:34.095250 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.096204 kubelet[3342]: E0303 12:47:34.096123 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.096204 kubelet[3342]: W0303 12:47:34.096149 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.096390 kubelet[3342]: E0303 12:47:34.096367 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.097076 kubelet[3342]: E0303 12:47:34.097020 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.097076 kubelet[3342]: W0303 12:47:34.097056 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.097222 kubelet[3342]: E0303 12:47:34.097086 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.097390 kubelet[3342]: E0303 12:47:34.097365 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.097446 kubelet[3342]: W0303 12:47:34.097389 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.097446 kubelet[3342]: E0303 12:47:34.097410 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.097730 kubelet[3342]: E0303 12:47:34.097681 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.097730 kubelet[3342]: W0303 12:47:34.097715 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.097828 kubelet[3342]: E0303 12:47:34.097749 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.098078 kubelet[3342]: E0303 12:47:34.098051 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.098153 kubelet[3342]: W0303 12:47:34.098075 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.098153 kubelet[3342]: E0303 12:47:34.098097 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.098393 kubelet[3342]: E0303 12:47:34.098368 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.098449 kubelet[3342]: W0303 12:47:34.098392 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.098449 kubelet[3342]: E0303 12:47:34.098413 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.098764 kubelet[3342]: E0303 12:47:34.098688 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.098825 kubelet[3342]: W0303 12:47:34.098761 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.098825 kubelet[3342]: E0303 12:47:34.098785 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.099090 kubelet[3342]: E0303 12:47:34.099064 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.099168 kubelet[3342]: W0303 12:47:34.099089 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.099168 kubelet[3342]: E0303 12:47:34.099110 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.165385 kubelet[3342]: E0303 12:47:34.165264 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.165385 kubelet[3342]: W0303 12:47:34.165318 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.165385 kubelet[3342]: E0303 12:47:34.165350 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.166124 kubelet[3342]: E0303 12:47:34.166102 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.166243 kubelet[3342]: W0303 12:47:34.166222 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.166381 kubelet[3342]: E0303 12:47:34.166360 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.166838 kubelet[3342]: E0303 12:47:34.166807 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.166959 kubelet[3342]: W0303 12:47:34.166836 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.166959 kubelet[3342]: E0303 12:47:34.166861 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.167187 kubelet[3342]: E0303 12:47:34.167156 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.167187 kubelet[3342]: W0303 12:47:34.167182 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.167374 kubelet[3342]: E0303 12:47:34.167206 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.167533 kubelet[3342]: E0303 12:47:34.167500 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.167533 kubelet[3342]: W0303 12:47:34.167525 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.167763 kubelet[3342]: E0303 12:47:34.167546 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.167921 kubelet[3342]: E0303 12:47:34.167894 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.168327 kubelet[3342]: W0303 12:47:34.167919 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.168327 kubelet[3342]: E0303 12:47:34.167942 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.168327 kubelet[3342]: E0303 12:47:34.168224 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.168327 kubelet[3342]: W0303 12:47:34.168240 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.168327 kubelet[3342]: E0303 12:47:34.168259 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.168658 kubelet[3342]: E0303 12:47:34.168631 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.168782 kubelet[3342]: W0303 12:47:34.168657 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.168782 kubelet[3342]: E0303 12:47:34.168680 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.169033 kubelet[3342]: E0303 12:47:34.169007 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.169103 kubelet[3342]: W0303 12:47:34.169031 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.169103 kubelet[3342]: E0303 12:47:34.169053 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.169389 kubelet[3342]: E0303 12:47:34.169364 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.169449 kubelet[3342]: W0303 12:47:34.169388 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.169449 kubelet[3342]: E0303 12:47:34.169412 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.169810 kubelet[3342]: E0303 12:47:34.169783 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.169908 kubelet[3342]: W0303 12:47:34.169808 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.169908 kubelet[3342]: E0303 12:47:34.169833 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.170152 kubelet[3342]: E0303 12:47:34.170127 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.170209 kubelet[3342]: W0303 12:47:34.170150 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.170209 kubelet[3342]: E0303 12:47:34.170171 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.170520 kubelet[3342]: E0303 12:47:34.170494 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.170588 kubelet[3342]: W0303 12:47:34.170519 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.170588 kubelet[3342]: E0303 12:47:34.170539 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.170935 kubelet[3342]: E0303 12:47:34.170907 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.171012 kubelet[3342]: W0303 12:47:34.170933 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.171012 kubelet[3342]: E0303 12:47:34.170956 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.171338 kubelet[3342]: E0303 12:47:34.171309 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.171410 kubelet[3342]: W0303 12:47:34.171335 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.171410 kubelet[3342]: E0303 12:47:34.171357 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.172809 kubelet[3342]: E0303 12:47:34.171966 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.172809 kubelet[3342]: W0303 12:47:34.171993 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.172809 kubelet[3342]: E0303 12:47:34.172015 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.172809 kubelet[3342]: E0303 12:47:34.172402 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.172809 kubelet[3342]: W0303 12:47:34.172424 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.172809 kubelet[3342]: E0303 12:47:34.172449 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:34.173393 kubelet[3342]: E0303 12:47:34.173368 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:34.173500 kubelet[3342]: W0303 12:47:34.173475 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:34.173603 kubelet[3342]: E0303 12:47:34.173581 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:34.759031 kubelet[3342]: E0303 12:47:34.757883 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:35.003422 kubelet[3342]: I0303 12:47:35.003376 3342 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 3 12:47:35.104485 kubelet[3342]: E0303 12:47:35.104335 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.104485 kubelet[3342]: W0303 12:47:35.104375 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.104485 kubelet[3342]: E0303 12:47:35.104412 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.106387 kubelet[3342]: E0303 12:47:35.106163 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.106387 kubelet[3342]: W0303 12:47:35.106201 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.106387 kubelet[3342]: E0303 12:47:35.106236 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.106978 kubelet[3342]: E0303 12:47:35.106911 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.106978 kubelet[3342]: W0303 12:47:35.106943 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.106978 kubelet[3342]: E0303 12:47:35.106972 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.108246 kubelet[3342]: E0303 12:47:35.108209 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.108394 kubelet[3342]: W0303 12:47:35.108244 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.108394 kubelet[3342]: E0303 12:47:35.108276 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.108678 kubelet[3342]: E0303 12:47:35.108650 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.108678 kubelet[3342]: W0303 12:47:35.108677 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.108849 kubelet[3342]: E0303 12:47:35.108743 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.109369 kubelet[3342]: E0303 12:47:35.109337 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.109489 kubelet[3342]: W0303 12:47:35.109367 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.109489 kubelet[3342]: E0303 12:47:35.109394 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.110649 kubelet[3342]: E0303 12:47:35.110597 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.110649 kubelet[3342]: W0303 12:47:35.110627 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.111024 kubelet[3342]: E0303 12:47:35.110654 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.111024 kubelet[3342]: E0303 12:47:35.110995 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.111024 kubelet[3342]: W0303 12:47:35.111013 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.111279 kubelet[3342]: E0303 12:47:35.111034 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.112997 kubelet[3342]: E0303 12:47:35.112807 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.112997 kubelet[3342]: W0303 12:47:35.112832 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.112997 kubelet[3342]: E0303 12:47:35.112859 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.113849 kubelet[3342]: E0303 12:47:35.113729 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.113849 kubelet[3342]: W0303 12:47:35.113752 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.113849 kubelet[3342]: E0303 12:47:35.113776 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.114710 kubelet[3342]: E0303 12:47:35.114529 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.114710 kubelet[3342]: W0303 12:47:35.114551 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.114710 kubelet[3342]: E0303 12:47:35.114573 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.115611 kubelet[3342]: E0303 12:47:35.115462 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.115611 kubelet[3342]: W0303 12:47:35.115492 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.115611 kubelet[3342]: E0303 12:47:35.115518 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.117347 kubelet[3342]: E0303 12:47:35.117178 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.117347 kubelet[3342]: W0303 12:47:35.117205 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.117347 kubelet[3342]: E0303 12:47:35.117234 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.118269 kubelet[3342]: E0303 12:47:35.118045 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.118269 kubelet[3342]: W0303 12:47:35.118098 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.118269 kubelet[3342]: E0303 12:47:35.118124 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.119257 kubelet[3342]: E0303 12:47:35.119207 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.120279 kubelet[3342]: W0303 12:47:35.119852 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.120279 kubelet[3342]: E0303 12:47:35.120008 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.177322 kubelet[3342]: E0303 12:47:35.177276 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.177572 kubelet[3342]: W0303 12:47:35.177499 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.177572 kubelet[3342]: E0303 12:47:35.177538 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.178488 kubelet[3342]: E0303 12:47:35.178322 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.178488 kubelet[3342]: W0303 12:47:35.178425 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.178488 kubelet[3342]: E0303 12:47:35.178450 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.179058 kubelet[3342]: E0303 12:47:35.178934 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.179058 kubelet[3342]: W0303 12:47:35.178955 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.179058 kubelet[3342]: E0303 12:47:35.178979 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.179354 kubelet[3342]: E0303 12:47:35.179322 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.179354 kubelet[3342]: W0303 12:47:35.179349 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.179737 kubelet[3342]: E0303 12:47:35.179372 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.179737 kubelet[3342]: E0303 12:47:35.179732 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.179955 kubelet[3342]: W0303 12:47:35.179751 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.179955 kubelet[3342]: E0303 12:47:35.179774 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.180260 kubelet[3342]: E0303 12:47:35.180230 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.180340 kubelet[3342]: W0303 12:47:35.180258 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.180340 kubelet[3342]: E0303 12:47:35.180283 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.181008 kubelet[3342]: E0303 12:47:35.180973 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.181008 kubelet[3342]: W0303 12:47:35.181003 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.181575 kubelet[3342]: E0303 12:47:35.181030 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.181575 kubelet[3342]: E0303 12:47:35.181347 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.181575 kubelet[3342]: W0303 12:47:35.181364 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.181575 kubelet[3342]: E0303 12:47:35.181417 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.181872 kubelet[3342]: E0303 12:47:35.181840 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.181872 kubelet[3342]: W0303 12:47:35.181858 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.182014 kubelet[3342]: E0303 12:47:35.181880 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.182238 kubelet[3342]: E0303 12:47:35.182210 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.182325 kubelet[3342]: W0303 12:47:35.182240 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.182325 kubelet[3342]: E0303 12:47:35.182263 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.182602 kubelet[3342]: E0303 12:47:35.182572 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.182602 kubelet[3342]: W0303 12:47:35.182597 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.182814 kubelet[3342]: E0303 12:47:35.182618 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.182999 kubelet[3342]: E0303 12:47:35.182972 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.183082 kubelet[3342]: W0303 12:47:35.182998 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.183082 kubelet[3342]: E0303 12:47:35.183019 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.183361 kubelet[3342]: E0303 12:47:35.183319 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.183361 kubelet[3342]: W0303 12:47:35.183336 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.183361 kubelet[3342]: E0303 12:47:35.183356 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.183730 kubelet[3342]: E0303 12:47:35.183642 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.183855 kubelet[3342]: W0303 12:47:35.183710 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.183855 kubelet[3342]: E0303 12:47:35.183758 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.184098 kubelet[3342]: E0303 12:47:35.184072 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.184403 kubelet[3342]: W0303 12:47:35.184097 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.184403 kubelet[3342]: E0303 12:47:35.184121 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.184960 kubelet[3342]: E0303 12:47:35.184928 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.185280 kubelet[3342]: W0303 12:47:35.184958 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.185280 kubelet[3342]: E0303 12:47:35.184987 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.186272 kubelet[3342]: E0303 12:47:35.186001 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.186272 kubelet[3342]: W0303 12:47:35.186035 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.186272 kubelet[3342]: E0303 12:47:35.186068 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 12:47:35.186894 kubelet[3342]: E0303 12:47:35.186798 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 12:47:35.186894 kubelet[3342]: W0303 12:47:35.186824 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 12:47:35.186894 kubelet[3342]: E0303 12:47:35.186851 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 12:47:35.585304 containerd[2007]: time="2026-03-03T12:47:35.585217177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:35.587407 containerd[2007]: time="2026-03-03T12:47:35.587104945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 3 12:47:35.590588 containerd[2007]: time="2026-03-03T12:47:35.590531725Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:35.599276 containerd[2007]: time="2026-03-03T12:47:35.599184853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:35.602724 containerd[2007]: time="2026-03-03T12:47:35.601912357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 2.706172982s" Mar 3 12:47:35.602724 containerd[2007]: time="2026-03-03T12:47:35.601983481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 3 12:47:35.613367 containerd[2007]: time="2026-03-03T12:47:35.613280293Z" level=info msg="CreateContainer within sandbox \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 3 12:47:35.632495 containerd[2007]: time="2026-03-03T12:47:35.631549693Z" level=info msg="Container ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:35.650642 containerd[2007]: time="2026-03-03T12:47:35.650586829Z" level=info msg="CreateContainer within sandbox \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00\"" Mar 3 12:47:35.652794 containerd[2007]: time="2026-03-03T12:47:35.652377481Z" level=info msg="StartContainer for \"ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00\"" Mar 3 12:47:35.655721 containerd[2007]: time="2026-03-03T12:47:35.655646065Z" level=info msg="connecting to shim ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00" address="unix:///run/containerd/s/9e9f8b54994b688185b30af38e3c2f30294645c1401c54f251ae09ce5b265f57" protocol=ttrpc version=3 Mar 3 12:47:35.709033 systemd[1]: Started cri-containerd-ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00.scope - libcontainer container ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00. 
Mar 3 12:47:35.831456 containerd[2007]: time="2026-03-03T12:47:35.831359366Z" level=info msg="StartContainer for \"ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00\" returns successfully" Mar 3 12:47:35.861872 systemd[1]: cri-containerd-ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00.scope: Deactivated successfully. Mar 3 12:47:35.870436 containerd[2007]: time="2026-03-03T12:47:35.870378158Z" level=info msg="received container exit event container_id:\"ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00\" id:\"ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00\" pid:4273 exited_at:{seconds:1772542055 nanos:869316854}" Mar 3 12:47:35.912045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed49794fcab0590801b2f0e0e036d53fa910af8ee980dce1e226655e176c4b00-rootfs.mount: Deactivated successfully. Mar 3 12:47:36.758307 kubelet[3342]: E0303 12:47:36.758231 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:37.023589 containerd[2007]: time="2026-03-03T12:47:37.022026876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 3 12:47:38.759570 kubelet[3342]: E0303 12:47:38.758044 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:40.758234 kubelet[3342]: E0303 12:47:40.758164 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:42.218415 kubelet[3342]: I0303 12:47:42.218375 3342 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 3 12:47:42.760420 kubelet[3342]: E0303 12:47:42.760346 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:44.566076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840783958.mount: Deactivated successfully. Mar 3 12:47:44.627316 containerd[2007]: time="2026-03-03T12:47:44.627237250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:44.629723 containerd[2007]: time="2026-03-03T12:47:44.629629162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 3 12:47:44.631055 containerd[2007]: time="2026-03-03T12:47:44.630961090Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:44.635043 containerd[2007]: time="2026-03-03T12:47:44.634893682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:44.636553 containerd[2007]: time="2026-03-03T12:47:44.636493282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 7.614399806s" Mar 3 12:47:44.636777 containerd[2007]: time="2026-03-03T12:47:44.636745642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 3 12:47:44.646058 containerd[2007]: time="2026-03-03T12:47:44.645865810Z" level=info msg="CreateContainer within sandbox \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 3 12:47:44.662080 containerd[2007]: time="2026-03-03T12:47:44.662007970Z" level=info msg="Container 4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:44.674453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781384162.mount: Deactivated successfully. 
Mar 3 12:47:44.686001 containerd[2007]: time="2026-03-03T12:47:44.685878634Z" level=info msg="CreateContainer within sandbox \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e\"" Mar 3 12:47:44.689569 containerd[2007]: time="2026-03-03T12:47:44.687507106Z" level=info msg="StartContainer for \"4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e\"" Mar 3 12:47:44.694425 containerd[2007]: time="2026-03-03T12:47:44.694246198Z" level=info msg="connecting to shim 4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e" address="unix:///run/containerd/s/9e9f8b54994b688185b30af38e3c2f30294645c1401c54f251ae09ce5b265f57" protocol=ttrpc version=3 Mar 3 12:47:44.738090 systemd[1]: Started cri-containerd-4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e.scope - libcontainer container 4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e. Mar 3 12:47:44.759058 kubelet[3342]: E0303 12:47:44.758971 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:44.876829 containerd[2007]: time="2026-03-03T12:47:44.876419099Z" level=info msg="StartContainer for \"4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e\" returns successfully" Mar 3 12:47:45.100235 systemd[1]: cri-containerd-4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e.scope: Deactivated successfully. 
Mar 3 12:47:45.114927 containerd[2007]: time="2026-03-03T12:47:45.114573836Z" level=info msg="received container exit event container_id:\"4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e\" id:\"4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e\" pid:4335 exited_at:{seconds:1772542065 nanos:113018852}" Mar 3 12:47:45.565244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a7aa93cbdb23caf32d6233a7a03ae547ddfb9d435937805a0b1946bd47b8d1e-rootfs.mount: Deactivated successfully. Mar 3 12:47:46.076030 containerd[2007]: time="2026-03-03T12:47:46.075535245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 3 12:47:46.758149 kubelet[3342]: E0303 12:47:46.758072 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:48.759035 kubelet[3342]: E0303 12:47:48.758678 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:49.576169 containerd[2007]: time="2026-03-03T12:47:49.576112790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:49.578209 containerd[2007]: time="2026-03-03T12:47:49.578163698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 3 12:47:49.578597 containerd[2007]: time="2026-03-03T12:47:49.578523446Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:49.587732 containerd[2007]: time="2026-03-03T12:47:49.586645250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:49.591267 containerd[2007]: time="2026-03-03T12:47:49.591209186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.515612693s" Mar 3 12:47:49.591579 containerd[2007]: time="2026-03-03T12:47:49.591534038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 3 12:47:49.600915 containerd[2007]: time="2026-03-03T12:47:49.600854522Z" level=info msg="CreateContainer within sandbox \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 3 12:47:49.613097 containerd[2007]: time="2026-03-03T12:47:49.613024502Z" level=info msg="Container aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:49.637290 containerd[2007]: time="2026-03-03T12:47:49.637092170Z" level=info msg="CreateContainer within sandbox \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204\"" Mar 3 12:47:49.638838 containerd[2007]: time="2026-03-03T12:47:49.638736098Z" level=info 
msg="StartContainer for \"aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204\"" Mar 3 12:47:49.645277 containerd[2007]: time="2026-03-03T12:47:49.645149871Z" level=info msg="connecting to shim aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204" address="unix:///run/containerd/s/9e9f8b54994b688185b30af38e3c2f30294645c1401c54f251ae09ce5b265f57" protocol=ttrpc version=3 Mar 3 12:47:49.687014 systemd[1]: Started cri-containerd-aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204.scope - libcontainer container aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204. Mar 3 12:47:49.813525 containerd[2007]: time="2026-03-03T12:47:49.813384915Z" level=info msg="StartContainer for \"aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204\" returns successfully" Mar 3 12:47:50.758330 kubelet[3342]: E0303 12:47:50.757728 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f" Mar 3 12:47:51.353024 containerd[2007]: time="2026-03-03T12:47:51.352940103Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 3 12:47:51.357485 systemd[1]: cri-containerd-aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204.scope: Deactivated successfully. Mar 3 12:47:51.358491 systemd[1]: cri-containerd-aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204.scope: Consumed 999ms CPU time, 188.4M memory peak, 936K read from disk, 171.3M written to disk. 
Mar 3 12:47:51.364465 containerd[2007]: time="2026-03-03T12:47:51.364323783Z" level=info msg="received container exit event container_id:\"aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204\" id:\"aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204\" pid:4395 exited_at:{seconds:1772542071 nanos:363973659}" Mar 3 12:47:51.442902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aecce55dd61db17f5c0efcae9be5ecb6c2fff5d2e8507ce59db04899f4477204-rootfs.mount: Deactivated successfully. Mar 3 12:47:51.453811 kubelet[3342]: I0303 12:47:51.453762 3342 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 3 12:47:51.542018 systemd[1]: Created slice kubepods-burstable-pod355afa55_697e_446f_9dc6_af783b23ea1e.slice - libcontainer container kubepods-burstable-pod355afa55_697e_446f_9dc6_af783b23ea1e.slice. Mar 3 12:47:51.571941 systemd[1]: Created slice kubepods-besteffort-pod1be38b67_1840_4e80_88ec_b6d5bc54edee.slice - libcontainer container kubepods-besteffort-pod1be38b67_1840_4e80_88ec_b6d5bc54edee.slice. Mar 3 12:47:51.595269 systemd[1]: Created slice kubepods-besteffort-pod30f7a08d_9fea_401e_84d5_e66e236079a5.slice - libcontainer container kubepods-besteffort-pod30f7a08d_9fea_401e_84d5_e66e236079a5.slice. 
Mar 3 12:47:51.608595 kubelet[3342]: I0303 12:47:51.608426 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4h5p\" (UniqueName: \"kubernetes.io/projected/6a0ee5eb-a971-47d3-b4fb-2d02d9887304-kube-api-access-w4h5p\") pod \"calico-apiserver-579696bc9c-ppbr5\" (UID: \"6a0ee5eb-a971-47d3-b4fb-2d02d9887304\") " pod="calico-system/calico-apiserver-579696bc9c-ppbr5" Mar 3 12:47:51.609424 kubelet[3342]: I0303 12:47:51.608882 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-backend-key-pair\") pod \"whisker-58b67fdd54-f2w47\" (UID: \"30f7a08d-9fea-401e-84d5-e66e236079a5\") " pod="calico-system/whisker-58b67fdd54-f2w47" Mar 3 12:47:51.610507 kubelet[3342]: I0303 12:47:51.610107 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-nginx-config\") pod \"whisker-58b67fdd54-f2w47\" (UID: \"30f7a08d-9fea-401e-84d5-e66e236079a5\") " pod="calico-system/whisker-58b67fdd54-f2w47" Mar 3 12:47:51.611238 kubelet[3342]: I0303 12:47:51.611179 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1be38b67-1840-4e80-88ec-b6d5bc54edee-goldmane-key-pair\") pod \"goldmane-9f7667bb8-wjtnn\" (UID: \"1be38b67-1840-4e80-88ec-b6d5bc54edee\") " pod="calico-system/goldmane-9f7667bb8-wjtnn" Mar 3 12:47:51.611376 kubelet[3342]: I0303 12:47:51.611248 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a0ee5eb-a971-47d3-b4fb-2d02d9887304-calico-apiserver-certs\") pod \"calico-apiserver-579696bc9c-ppbr5\" (UID: 
\"6a0ee5eb-a971-47d3-b4fb-2d02d9887304\") " pod="calico-system/calico-apiserver-579696bc9c-ppbr5" Mar 3 12:47:51.611376 kubelet[3342]: I0303 12:47:51.611305 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/355afa55-697e-446f-9dc6-af783b23ea1e-config-volume\") pod \"coredns-7d764666f9-b8v6l\" (UID: \"355afa55-697e-446f-9dc6-af783b23ea1e\") " pod="kube-system/coredns-7d764666f9-b8v6l" Mar 3 12:47:51.611376 kubelet[3342]: I0303 12:47:51.611343 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1be38b67-1840-4e80-88ec-b6d5bc54edee-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-wjtnn\" (UID: \"1be38b67-1840-4e80-88ec-b6d5bc54edee\") " pod="calico-system/goldmane-9f7667bb8-wjtnn" Mar 3 12:47:51.611536 kubelet[3342]: I0303 12:47:51.611386 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sxlb\" (UniqueName: \"kubernetes.io/projected/ec879cb1-4c4f-4c95-8084-9fe73f99a62f-kube-api-access-2sxlb\") pod \"calico-kube-controllers-795844594-88vdk\" (UID: \"ec879cb1-4c4f-4c95-8084-9fe73f99a62f\") " pod="calico-system/calico-kube-controllers-795844594-88vdk" Mar 3 12:47:51.611536 kubelet[3342]: I0303 12:47:51.611423 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c5hk\" (UniqueName: \"kubernetes.io/projected/1be38b67-1840-4e80-88ec-b6d5bc54edee-kube-api-access-7c5hk\") pod \"goldmane-9f7667bb8-wjtnn\" (UID: \"1be38b67-1840-4e80-88ec-b6d5bc54edee\") " pod="calico-system/goldmane-9f7667bb8-wjtnn" Mar 3 12:47:51.611536 kubelet[3342]: I0303 12:47:51.611478 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1be38b67-1840-4e80-88ec-b6d5bc54edee-config\") pod \"goldmane-9f7667bb8-wjtnn\" (UID: \"1be38b67-1840-4e80-88ec-b6d5bc54edee\") " pod="calico-system/goldmane-9f7667bb8-wjtnn" Mar 3 12:47:51.611536 kubelet[3342]: I0303 12:47:51.611524 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec879cb1-4c4f-4c95-8084-9fe73f99a62f-tigera-ca-bundle\") pod \"calico-kube-controllers-795844594-88vdk\" (UID: \"ec879cb1-4c4f-4c95-8084-9fe73f99a62f\") " pod="calico-system/calico-kube-controllers-795844594-88vdk" Mar 3 12:47:51.613130 kubelet[3342]: I0303 12:47:51.611562 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95dg2\" (UniqueName: \"kubernetes.io/projected/355afa55-697e-446f-9dc6-af783b23ea1e-kube-api-access-95dg2\") pod \"coredns-7d764666f9-b8v6l\" (UID: \"355afa55-697e-446f-9dc6-af783b23ea1e\") " pod="kube-system/coredns-7d764666f9-b8v6l" Mar 3 12:47:51.613130 kubelet[3342]: I0303 12:47:51.611598 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-ca-bundle\") pod \"whisker-58b67fdd54-f2w47\" (UID: \"30f7a08d-9fea-401e-84d5-e66e236079a5\") " pod="calico-system/whisker-58b67fdd54-f2w47" Mar 3 12:47:51.613130 kubelet[3342]: I0303 12:47:51.611642 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbg6\" (UniqueName: \"kubernetes.io/projected/30f7a08d-9fea-401e-84d5-e66e236079a5-kube-api-access-vfbg6\") pod \"whisker-58b67fdd54-f2w47\" (UID: \"30f7a08d-9fea-401e-84d5-e66e236079a5\") " pod="calico-system/whisker-58b67fdd54-f2w47" Mar 3 12:47:51.613868 systemd[1]: Created slice kubepods-besteffort-podec879cb1_4c4f_4c95_8084_9fe73f99a62f.slice - libcontainer 
container kubepods-besteffort-podec879cb1_4c4f_4c95_8084_9fe73f99a62f.slice. Mar 3 12:47:51.634489 systemd[1]: Created slice kubepods-besteffort-pod6a0ee5eb_a971_47d3_b4fb_2d02d9887304.slice - libcontainer container kubepods-besteffort-pod6a0ee5eb_a971_47d3_b4fb_2d02d9887304.slice. Mar 3 12:47:51.648319 systemd[1]: Created slice kubepods-burstable-pod83fbd844_9c20_4ee5_9da3_482034c2125b.slice - libcontainer container kubepods-burstable-pod83fbd844_9c20_4ee5_9da3_482034c2125b.slice. Mar 3 12:47:51.666920 systemd[1]: Created slice kubepods-besteffort-pod6e1a928f_acc4_4d48_9017_c2fbcea7def2.slice - libcontainer container kubepods-besteffort-pod6e1a928f_acc4_4d48_9017_c2fbcea7def2.slice. Mar 3 12:47:51.714556 kubelet[3342]: I0303 12:47:51.713232 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xclj\" (UniqueName: \"kubernetes.io/projected/83fbd844-9c20-4ee5-9da3-482034c2125b-kube-api-access-4xclj\") pod \"coredns-7d764666f9-gnfdj\" (UID: \"83fbd844-9c20-4ee5-9da3-482034c2125b\") " pod="kube-system/coredns-7d764666f9-gnfdj" Mar 3 12:47:51.714556 kubelet[3342]: I0303 12:47:51.713302 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wx7\" (UniqueName: \"kubernetes.io/projected/6e1a928f-acc4-4d48-9017-c2fbcea7def2-kube-api-access-s2wx7\") pod \"calico-apiserver-579696bc9c-5btc9\" (UID: \"6e1a928f-acc4-4d48-9017-c2fbcea7def2\") " pod="calico-system/calico-apiserver-579696bc9c-5btc9" Mar 3 12:47:51.714556 kubelet[3342]: I0303 12:47:51.713367 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83fbd844-9c20-4ee5-9da3-482034c2125b-config-volume\") pod \"coredns-7d764666f9-gnfdj\" (UID: \"83fbd844-9c20-4ee5-9da3-482034c2125b\") " pod="kube-system/coredns-7d764666f9-gnfdj" Mar 3 12:47:51.714556 kubelet[3342]: I0303 12:47:51.713457 3342 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e1a928f-acc4-4d48-9017-c2fbcea7def2-calico-apiserver-certs\") pod \"calico-apiserver-579696bc9c-5btc9\" (UID: \"6e1a928f-acc4-4d48-9017-c2fbcea7def2\") " pod="calico-system/calico-apiserver-579696bc9c-5btc9" Mar 3 12:47:51.869917 containerd[2007]: time="2026-03-03T12:47:51.868068798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b8v6l,Uid:355afa55-697e-446f-9dc6-af783b23ea1e,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:51.888393 containerd[2007]: time="2026-03-03T12:47:51.888346014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wjtnn,Uid:1be38b67-1840-4e80-88ec-b6d5bc54edee,Namespace:calico-system,Attempt:0,}" Mar 3 12:47:51.910628 containerd[2007]: time="2026-03-03T12:47:51.910577382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58b67fdd54-f2w47,Uid:30f7a08d-9fea-401e-84d5-e66e236079a5,Namespace:calico-system,Attempt:0,}" Mar 3 12:47:51.926810 containerd[2007]: time="2026-03-03T12:47:51.926554122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795844594-88vdk,Uid:ec879cb1-4c4f-4c95-8084-9fe73f99a62f,Namespace:calico-system,Attempt:0,}" Mar 3 12:47:51.946527 containerd[2007]: time="2026-03-03T12:47:51.946430898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579696bc9c-ppbr5,Uid:6a0ee5eb-a971-47d3-b4fb-2d02d9887304,Namespace:calico-system,Attempt:0,}" Mar 3 12:47:51.967839 containerd[2007]: time="2026-03-03T12:47:51.966967242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gnfdj,Uid:83fbd844-9c20-4ee5-9da3-482034c2125b,Namespace:kube-system,Attempt:0,}" Mar 3 12:47:51.987085 containerd[2007]: time="2026-03-03T12:47:51.987031338Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-579696bc9c-5btc9,Uid:6e1a928f-acc4-4d48-9017-c2fbcea7def2,Namespace:calico-system,Attempt:0,}" Mar 3 12:47:52.185609 containerd[2007]: time="2026-03-03T12:47:52.185271975Z" level=info msg="CreateContainer within sandbox \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 3 12:47:52.276082 containerd[2007]: time="2026-03-03T12:47:52.275971852Z" level=info msg="Container f9d4c070119c31749a1d7aa1896d150a978938e9ebd11cae74eaf0d19664e64f: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:52.346305 containerd[2007]: time="2026-03-03T12:47:52.346189636Z" level=info msg="CreateContainer within sandbox \"7ea07ca292564cc640cd3c7bf1e9fea46e534d28e5a1d246daf5e026c2c746e9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f9d4c070119c31749a1d7aa1896d150a978938e9ebd11cae74eaf0d19664e64f\"" Mar 3 12:47:52.349410 containerd[2007]: time="2026-03-03T12:47:52.349053964Z" level=info msg="StartContainer for \"f9d4c070119c31749a1d7aa1896d150a978938e9ebd11cae74eaf0d19664e64f\"" Mar 3 12:47:52.371175 containerd[2007]: time="2026-03-03T12:47:52.371071708Z" level=info msg="connecting to shim f9d4c070119c31749a1d7aa1896d150a978938e9ebd11cae74eaf0d19664e64f" address="unix:///run/containerd/s/9e9f8b54994b688185b30af38e3c2f30294645c1401c54f251ae09ce5b265f57" protocol=ttrpc version=3 Mar 3 12:47:52.409256 containerd[2007]: time="2026-03-03T12:47:52.409170520Z" level=error msg="Failed to destroy network for sandbox \"ee105446161ac8ccefdec45cd5488e01e548c5f08728f6ee0c3a491cab5082f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.416992 containerd[2007]: time="2026-03-03T12:47:52.416899912Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-b8v6l,Uid:355afa55-697e-446f-9dc6-af783b23ea1e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee105446161ac8ccefdec45cd5488e01e548c5f08728f6ee0c3a491cab5082f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.417715 kubelet[3342]: E0303 12:47:52.417476 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee105446161ac8ccefdec45cd5488e01e548c5f08728f6ee0c3a491cab5082f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.417715 kubelet[3342]: E0303 12:47:52.417586 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee105446161ac8ccefdec45cd5488e01e548c5f08728f6ee0c3a491cab5082f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-b8v6l" Mar 3 12:47:52.417715 kubelet[3342]: E0303 12:47:52.417620 3342 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee105446161ac8ccefdec45cd5488e01e548c5f08728f6ee0c3a491cab5082f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-b8v6l" Mar 3 12:47:52.421366 kubelet[3342]: E0303 12:47:52.418223 3342 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-b8v6l_kube-system(355afa55-697e-446f-9dc6-af783b23ea1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-b8v6l_kube-system(355afa55-697e-446f-9dc6-af783b23ea1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee105446161ac8ccefdec45cd5488e01e548c5f08728f6ee0c3a491cab5082f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-b8v6l" podUID="355afa55-697e-446f-9dc6-af783b23ea1e" Mar 3 12:47:52.427412 containerd[2007]: time="2026-03-03T12:47:52.427290856Z" level=error msg="Failed to destroy network for sandbox \"de4225ae0270c60bb2afe2bcd09d1a1396ace47da0487a9d72b3c03cd0415d14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.437045 containerd[2007]: time="2026-03-03T12:47:52.436378408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579696bc9c-ppbr5,Uid:6a0ee5eb-a971-47d3-b4fb-2d02d9887304,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4225ae0270c60bb2afe2bcd09d1a1396ace47da0487a9d72b3c03cd0415d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.437968 kubelet[3342]: E0303 12:47:52.437868 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4225ae0270c60bb2afe2bcd09d1a1396ace47da0487a9d72b3c03cd0415d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.439728 kubelet[3342]: E0303 12:47:52.438338 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4225ae0270c60bb2afe2bcd09d1a1396ace47da0487a9d72b3c03cd0415d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-579696bc9c-ppbr5" Mar 3 12:47:52.440354 kubelet[3342]: E0303 12:47:52.439921 3342 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4225ae0270c60bb2afe2bcd09d1a1396ace47da0487a9d72b3c03cd0415d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-579696bc9c-ppbr5" Mar 3 12:47:52.440354 kubelet[3342]: E0303 12:47:52.440275 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-579696bc9c-ppbr5_calico-system(6a0ee5eb-a971-47d3-b4fb-2d02d9887304)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-579696bc9c-ppbr5_calico-system(6a0ee5eb-a971-47d3-b4fb-2d02d9887304)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de4225ae0270c60bb2afe2bcd09d1a1396ace47da0487a9d72b3c03cd0415d14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-579696bc9c-ppbr5" podUID="6a0ee5eb-a971-47d3-b4fb-2d02d9887304" Mar 3 12:47:52.484860 containerd[2007]: 
time="2026-03-03T12:47:52.484124345Z" level=error msg="Failed to destroy network for sandbox \"f5c4d43c86852f21b7425bb57b13d5971369ef895ecc302be04fd79b6c919607\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.495195 systemd[1]: run-netns-cni\x2d1c08af46\x2d4f5e\x2dde32\x2de064\x2d57c97a5742f9.mount: Deactivated successfully. Mar 3 12:47:52.532917 containerd[2007]: time="2026-03-03T12:47:52.532784909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58b67fdd54-f2w47,Uid:30f7a08d-9fea-401e-84d5-e66e236079a5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c4d43c86852f21b7425bb57b13d5971369ef895ecc302be04fd79b6c919607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.533589 kubelet[3342]: E0303 12:47:52.533542 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c4d43c86852f21b7425bb57b13d5971369ef895ecc302be04fd79b6c919607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 12:47:52.533862 kubelet[3342]: E0303 12:47:52.533825 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c4d43c86852f21b7425bb57b13d5971369ef895ecc302be04fd79b6c919607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58b67fdd54-f2w47" Mar 3 
12:47:52.534116 kubelet[3342]: E0303 12:47:52.534078 3342 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c4d43c86852f21b7425bb57b13d5971369ef895ecc302be04fd79b6c919607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58b67fdd54-f2w47" Mar 3 12:47:52.534377 kubelet[3342]: E0303 12:47:52.534292 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58b67fdd54-f2w47_calico-system(30f7a08d-9fea-401e-84d5-e66e236079a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58b67fdd54-f2w47_calico-system(30f7a08d-9fea-401e-84d5-e66e236079a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5c4d43c86852f21b7425bb57b13d5971369ef895ecc302be04fd79b6c919607\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58b67fdd54-f2w47" podUID="30f7a08d-9fea-401e-84d5-e66e236079a5" Mar 3 12:47:52.540316 systemd[1]: Started cri-containerd-f9d4c070119c31749a1d7aa1896d150a978938e9ebd11cae74eaf0d19664e64f.scope - libcontainer container f9d4c070119c31749a1d7aa1896d150a978938e9ebd11cae74eaf0d19664e64f. 
Mar 3 12:47:52.560209 containerd[2007]: time="2026-03-03T12:47:52.559946981Z" level=error msg="Failed to destroy network for sandbox \"b5851344409f2b5989f7bf44b36bfc3b3e2eb37724b7a6d1b2061d9706d91719\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.566624 systemd[1]: run-netns-cni\x2d46183bdb\x2d32af\x2d7b06\x2d4ddf\x2d894e6cc7629a.mount: Deactivated successfully.
Mar 3 12:47:52.569681 containerd[2007]: time="2026-03-03T12:47:52.569510201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795844594-88vdk,Uid:ec879cb1-4c4f-4c95-8084-9fe73f99a62f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5851344409f2b5989f7bf44b36bfc3b3e2eb37724b7a6d1b2061d9706d91719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.570424 kubelet[3342]: E0303 12:47:52.570134 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5851344409f2b5989f7bf44b36bfc3b3e2eb37724b7a6d1b2061d9706d91719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.570424 kubelet[3342]: E0303 12:47:52.570226 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5851344409f2b5989f7bf44b36bfc3b3e2eb37724b7a6d1b2061d9706d91719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795844594-88vdk"
Mar 3 12:47:52.570424 kubelet[3342]: E0303 12:47:52.570262 3342 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5851344409f2b5989f7bf44b36bfc3b3e2eb37724b7a6d1b2061d9706d91719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795844594-88vdk"
Mar 3 12:47:52.570608 kubelet[3342]: E0303 12:47:52.570341 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-795844594-88vdk_calico-system(ec879cb1-4c4f-4c95-8084-9fe73f99a62f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-795844594-88vdk_calico-system(ec879cb1-4c4f-4c95-8084-9fe73f99a62f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5851344409f2b5989f7bf44b36bfc3b3e2eb37724b7a6d1b2061d9706d91719\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-795844594-88vdk" podUID="ec879cb1-4c4f-4c95-8084-9fe73f99a62f"
Mar 3 12:47:52.574599 containerd[2007]: time="2026-03-03T12:47:52.574542137Z" level=error msg="Failed to destroy network for sandbox \"642f1ebf7a207af79a50a042c3b1084f71c95fec4fc075349f3cb71fcd43641a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.578910 containerd[2007]: time="2026-03-03T12:47:52.578840585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gnfdj,Uid:83fbd844-9c20-4ee5-9da3-482034c2125b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"642f1ebf7a207af79a50a042c3b1084f71c95fec4fc075349f3cb71fcd43641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.580372 kubelet[3342]: E0303 12:47:52.579481 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642f1ebf7a207af79a50a042c3b1084f71c95fec4fc075349f3cb71fcd43641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.581165 kubelet[3342]: E0303 12:47:52.579615 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642f1ebf7a207af79a50a042c3b1084f71c95fec4fc075349f3cb71fcd43641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gnfdj"
Mar 3 12:47:52.581165 kubelet[3342]: E0303 12:47:52.580793 3342 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642f1ebf7a207af79a50a042c3b1084f71c95fec4fc075349f3cb71fcd43641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gnfdj"
Mar 3 12:47:52.581165 kubelet[3342]: E0303 12:47:52.580889 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-gnfdj_kube-system(83fbd844-9c20-4ee5-9da3-482034c2125b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-gnfdj_kube-system(83fbd844-9c20-4ee5-9da3-482034c2125b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"642f1ebf7a207af79a50a042c3b1084f71c95fec4fc075349f3cb71fcd43641a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-gnfdj" podUID="83fbd844-9c20-4ee5-9da3-482034c2125b"
Mar 3 12:47:52.582966 systemd[1]: run-netns-cni\x2d675b68f6\x2df3a9\x2dcc10\x2d85b0\x2d1f6b5db62a1d.mount: Deactivated successfully.
Mar 3 12:47:52.586305 containerd[2007]: time="2026-03-03T12:47:52.586249637Z" level=error msg="Failed to destroy network for sandbox \"bec96f06406802e1e8cb0bd0541b9aea92d7394de77b1211920f4693cca0aa68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.593224 systemd[1]: run-netns-cni\x2dc14cd464\x2da2c2\x2d1bde\x2d2d9c\x2df6e8cc78baf0.mount: Deactivated successfully.
Mar 3 12:47:52.594077 containerd[2007]: time="2026-03-03T12:47:52.593760245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wjtnn,Uid:1be38b67-1840-4e80-88ec-b6d5bc54edee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec96f06406802e1e8cb0bd0541b9aea92d7394de77b1211920f4693cca0aa68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.598014 kubelet[3342]: E0303 12:47:52.597520 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec96f06406802e1e8cb0bd0541b9aea92d7394de77b1211920f4693cca0aa68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.600089 kubelet[3342]: E0303 12:47:52.598084 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec96f06406802e1e8cb0bd0541b9aea92d7394de77b1211920f4693cca0aa68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wjtnn"
Mar 3 12:47:52.600089 kubelet[3342]: E0303 12:47:52.598133 3342 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec96f06406802e1e8cb0bd0541b9aea92d7394de77b1211920f4693cca0aa68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wjtnn"
Mar 3 12:47:52.600089 kubelet[3342]: E0303 12:47:52.598461 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-wjtnn_calico-system(1be38b67-1840-4e80-88ec-b6d5bc54edee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-wjtnn_calico-system(1be38b67-1840-4e80-88ec-b6d5bc54edee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bec96f06406802e1e8cb0bd0541b9aea92d7394de77b1211920f4693cca0aa68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-wjtnn" podUID="1be38b67-1840-4e80-88ec-b6d5bc54edee"
Mar 3 12:47:52.617051 containerd[2007]: time="2026-03-03T12:47:52.616880477Z" level=error msg="Failed to destroy network for sandbox \"59d361b8aec4dec9f7fc3e10e8222ba4dd73bd1eb19568410c5d71bd5d9373d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.620086 containerd[2007]: time="2026-03-03T12:47:52.619976333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579696bc9c-5btc9,Uid:6e1a928f-acc4-4d48-9017-c2fbcea7def2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d361b8aec4dec9f7fc3e10e8222ba4dd73bd1eb19568410c5d71bd5d9373d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.620684 kubelet[3342]: E0303 12:47:52.620533 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d361b8aec4dec9f7fc3e10e8222ba4dd73bd1eb19568410c5d71bd5d9373d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.620684 kubelet[3342]: E0303 12:47:52.620619 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d361b8aec4dec9f7fc3e10e8222ba4dd73bd1eb19568410c5d71bd5d9373d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-579696bc9c-5btc9"
Mar 3 12:47:52.620684 kubelet[3342]: E0303 12:47:52.620659 3342 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d361b8aec4dec9f7fc3e10e8222ba4dd73bd1eb19568410c5d71bd5d9373d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-579696bc9c-5btc9"
Mar 3 12:47:52.620937 kubelet[3342]: E0303 12:47:52.620812 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-579696bc9c-5btc9_calico-system(6e1a928f-acc4-4d48-9017-c2fbcea7def2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-579696bc9c-5btc9_calico-system(6e1a928f-acc4-4d48-9017-c2fbcea7def2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59d361b8aec4dec9f7fc3e10e8222ba4dd73bd1eb19568410c5d71bd5d9373d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-579696bc9c-5btc9" podUID="6e1a928f-acc4-4d48-9017-c2fbcea7def2"
Mar 3 12:47:52.717826 containerd[2007]: time="2026-03-03T12:47:52.717513150Z" level=info msg="StartContainer for \"f9d4c070119c31749a1d7aa1896d150a978938e9ebd11cae74eaf0d19664e64f\" returns successfully"
Mar 3 12:47:52.775382 systemd[1]: Created slice kubepods-besteffort-pod7cdb4c7c_a3da_4294_8141_e0392c7cd04f.slice - libcontainer container kubepods-besteffort-pod7cdb4c7c_a3da_4294_8141_e0392c7cd04f.slice.
Mar 3 12:47:52.787410 containerd[2007]: time="2026-03-03T12:47:52.787267002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-89x9d,Uid:7cdb4c7c-a3da-4294-8141-e0392c7cd04f,Namespace:calico-system,Attempt:0,}"
Mar 3 12:47:52.922273 containerd[2007]: time="2026-03-03T12:47:52.922187959Z" level=error msg="Failed to destroy network for sandbox \"cf873a7a9de6b008aa64bf7c1c983bd0a1347bcaeaf8dfa5d57146cf6b71c005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.926480 containerd[2007]: time="2026-03-03T12:47:52.926343187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-89x9d,Uid:7cdb4c7c-a3da-4294-8141-e0392c7cd04f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf873a7a9de6b008aa64bf7c1c983bd0a1347bcaeaf8dfa5d57146cf6b71c005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.927804 kubelet[3342]: E0303 12:47:52.927341 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf873a7a9de6b008aa64bf7c1c983bd0a1347bcaeaf8dfa5d57146cf6b71c005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 12:47:52.928061 kubelet[3342]: E0303 12:47:52.928007 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf873a7a9de6b008aa64bf7c1c983bd0a1347bcaeaf8dfa5d57146cf6b71c005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-89x9d"
Mar 3 12:47:52.928178 kubelet[3342]: E0303 12:47:52.928149 3342 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf873a7a9de6b008aa64bf7c1c983bd0a1347bcaeaf8dfa5d57146cf6b71c005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-89x9d"
Mar 3 12:47:52.929257 kubelet[3342]: E0303 12:47:52.928372 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-89x9d_calico-system(7cdb4c7c-a3da-4294-8141-e0392c7cd04f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-89x9d_calico-system(7cdb4c7c-a3da-4294-8141-e0392c7cd04f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf873a7a9de6b008aa64bf7c1c983bd0a1347bcaeaf8dfa5d57146cf6b71c005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-89x9d" podUID="7cdb4c7c-a3da-4294-8141-e0392c7cd04f"
Mar 3 12:47:53.243281 kubelet[3342]: I0303 12:47:53.243136 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-2bcqj" podStartSLOduration=2.226501327 podStartE2EDuration="24.243107836s" podCreationTimestamp="2026-03-03 12:47:29 +0000 UTC" firstStartedPulling="2026-03-03 12:47:30.12975465 +0000 UTC m=+27.681459967" lastFinishedPulling="2026-03-03 12:47:52.146361135 +0000 UTC m=+49.698066476" observedRunningTime="2026-03-03 12:47:53.2379826 +0000 UTC m=+50.789687929" watchObservedRunningTime="2026-03-03 12:47:53.243107836 +0000 UTC m=+50.794813165"
Mar 3 12:47:53.326942 kubelet[3342]: I0303 12:47:53.326889 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-backend-key-pair\") pod \"30f7a08d-9fea-401e-84d5-e66e236079a5\" (UID: \"30f7a08d-9fea-401e-84d5-e66e236079a5\") "
Mar 3 12:47:53.327125 kubelet[3342]: I0303 12:47:53.326976 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-ca-bundle\") pod \"30f7a08d-9fea-401e-84d5-e66e236079a5\" (UID: \"30f7a08d-9fea-401e-84d5-e66e236079a5\") "
Mar 3 12:47:53.327125 kubelet[3342]: I0303 12:47:53.327028 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/30f7a08d-9fea-401e-84d5-e66e236079a5-kube-api-access-vfbg6\" (UniqueName: \"kubernetes.io/projected/30f7a08d-9fea-401e-84d5-e66e236079a5-kube-api-access-vfbg6\") pod \"30f7a08d-9fea-401e-84d5-e66e236079a5\" (UID: \"30f7a08d-9fea-401e-84d5-e66e236079a5\") "
Mar 3 12:47:53.327125 kubelet[3342]: I0303 12:47:53.327077 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-nginx-config\" (UniqueName: \"kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-nginx-config\") pod \"30f7a08d-9fea-401e-84d5-e66e236079a5\" (UID: \"30f7a08d-9fea-401e-84d5-e66e236079a5\") "
Mar 3 12:47:53.329059 kubelet[3342]: I0303 12:47:53.328988 3342 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-nginx-config" pod "30f7a08d-9fea-401e-84d5-e66e236079a5" (UID: "30f7a08d-9fea-401e-84d5-e66e236079a5"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 12:47:53.330733 kubelet[3342]: I0303 12:47:53.328677 3342 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-ca-bundle" pod "30f7a08d-9fea-401e-84d5-e66e236079a5" (UID: "30f7a08d-9fea-401e-84d5-e66e236079a5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 12:47:53.338014 kubelet[3342]: I0303 12:47:53.337647 3342 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f7a08d-9fea-401e-84d5-e66e236079a5-kube-api-access-vfbg6" pod "30f7a08d-9fea-401e-84d5-e66e236079a5" (UID: "30f7a08d-9fea-401e-84d5-e66e236079a5"). InnerVolumeSpecName "kube-api-access-vfbg6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 12:47:53.341607 kubelet[3342]: I0303 12:47:53.341531 3342 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-backend-key-pair" pod "30f7a08d-9fea-401e-84d5-e66e236079a5" (UID: "30f7a08d-9fea-401e-84d5-e66e236079a5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 3 12:47:53.429100 kubelet[3342]: I0303 12:47:53.429042 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-ca-bundle\") on node \"ip-172-31-25-173\" DevicePath \"\""
Mar 3 12:47:53.429100 kubelet[3342]: I0303 12:47:53.429097 3342 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfbg6\" (UniqueName: \"kubernetes.io/projected/30f7a08d-9fea-401e-84d5-e66e236079a5-kube-api-access-vfbg6\") on node \"ip-172-31-25-173\" DevicePath \"\""
Mar 3 12:47:53.429726 kubelet[3342]: I0303 12:47:53.429125 3342 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/30f7a08d-9fea-401e-84d5-e66e236079a5-nginx-config\") on node \"ip-172-31-25-173\" DevicePath \"\""
Mar 3 12:47:53.429726 kubelet[3342]: I0303 12:47:53.429147 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30f7a08d-9fea-401e-84d5-e66e236079a5-whisker-backend-key-pair\") on node \"ip-172-31-25-173\" DevicePath \"\""
Mar 3 12:47:53.445484 systemd[1]: run-netns-cni\x2d651a04bc\x2d6062\x2d803b\x2d174d\x2deb1a03f655e9.mount: Deactivated successfully.
Mar 3 12:47:53.445678 systemd[1]: var-lib-kubelet-pods-30f7a08d\x2d9fea\x2d401e\x2d84d5\x2de66e236079a5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvfbg6.mount: Deactivated successfully.
Mar 3 12:47:53.445857 systemd[1]: var-lib-kubelet-pods-30f7a08d\x2d9fea\x2d401e\x2d84d5\x2de66e236079a5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 3 12:47:54.175406 systemd[1]: Removed slice kubepods-besteffort-pod30f7a08d_9fea_401e_84d5_e66e236079a5.slice - libcontainer container kubepods-besteffort-pod30f7a08d_9fea_401e_84d5_e66e236079a5.slice.
Mar 3 12:47:54.279965 systemd[1]: Created slice kubepods-besteffort-pod49914c72_56a2_4a88_aef6_bf6703375616.slice - libcontainer container kubepods-besteffort-pod49914c72_56a2_4a88_aef6_bf6703375616.slice.
Mar 3 12:47:54.336979 kubelet[3342]: I0303 12:47:54.336918 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49914c72-56a2-4a88-aef6-bf6703375616-whisker-ca-bundle\") pod \"whisker-6865674dc5-dhm4r\" (UID: \"49914c72-56a2-4a88-aef6-bf6703375616\") " pod="calico-system/whisker-6865674dc5-dhm4r"
Mar 3 12:47:54.337271 kubelet[3342]: I0303 12:47:54.337243 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/49914c72-56a2-4a88-aef6-bf6703375616-nginx-config\") pod \"whisker-6865674dc5-dhm4r\" (UID: \"49914c72-56a2-4a88-aef6-bf6703375616\") " pod="calico-system/whisker-6865674dc5-dhm4r"
Mar 3 12:47:54.337450 kubelet[3342]: I0303 12:47:54.337407 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/49914c72-56a2-4a88-aef6-bf6703375616-whisker-backend-key-pair\") pod \"whisker-6865674dc5-dhm4r\" (UID: \"49914c72-56a2-4a88-aef6-bf6703375616\") " pod="calico-system/whisker-6865674dc5-dhm4r"
Mar 3 12:47:54.337650 kubelet[3342]: I0303 12:47:54.337607 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f4k9\" (UniqueName: \"kubernetes.io/projected/49914c72-56a2-4a88-aef6-bf6703375616-kube-api-access-6f4k9\") pod \"whisker-6865674dc5-dhm4r\" (UID: \"49914c72-56a2-4a88-aef6-bf6703375616\") " pod="calico-system/whisker-6865674dc5-dhm4r"
Mar 3 12:47:54.593903 containerd[2007]: time="2026-03-03T12:47:54.593831047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6865674dc5-dhm4r,Uid:49914c72-56a2-4a88-aef6-bf6703375616,Namespace:calico-system,Attempt:0,}"
Mar 3 12:47:54.771779 kubelet[3342]: I0303 12:47:54.771643 3342 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="30f7a08d-9fea-401e-84d5-e66e236079a5" path="/var/lib/kubelet/pods/30f7a08d-9fea-401e-84d5-e66e236079a5/volumes"
Mar 3 12:47:55.058777 systemd-networkd[1853]: calie0994aaab41: Link UP
Mar 3 12:47:55.062465 systemd-networkd[1853]: calie0994aaab41: Gained carrier
Mar 3 12:47:55.079240 (udev-worker)[4816]: Network interface NamePolicy= disabled on kernel command line.
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.679 [ERROR][4761] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.738 [INFO][4761] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0 whisker-6865674dc5- calico-system 49914c72-56a2-4a88-aef6-bf6703375616 951 0 2026-03-03 12:47:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6865674dc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-25-173 whisker-6865674dc5-dhm4r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie0994aaab41 [] [] }} ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Namespace="calico-system" Pod="whisker-6865674dc5-dhm4r" WorkloadEndpoint="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.739 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Namespace="calico-system" Pod="whisker-6865674dc5-dhm4r" WorkloadEndpoint="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.926 [INFO][4804] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" HandleID="k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Workload="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.961 [INFO][4804] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" HandleID="k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Workload="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000367de0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-173", "pod":"whisker-6865674dc5-dhm4r", "timestamp":"2026-03-03 12:47:54.926644329 +0000 UTC"}, Hostname:"ip-172-31-25-173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004e0580)}
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.962 [INFO][4804] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.962 [INFO][4804] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.963 [INFO][4804] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-173'
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.969 [INFO][4804] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.979 [INFO][4804] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.990 [INFO][4804] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.994 [INFO][4804] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.999 [INFO][4804] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:54.999 [INFO][4804] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:55.002 [INFO][4804] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:55.011 [INFO][4804] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:55.021 [INFO][4804] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.103.1/26] block=192.168.103.0/26 handle="k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:55.021 [INFO][4804] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.1/26] handle="k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" host="ip-172-31-25-173"
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:55.021 [INFO][4804] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 3 12:47:55.114256 containerd[2007]: 2026-03-03 12:47:55.022 [INFO][4804] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.1/26] IPv6=[] ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" HandleID="k8s-pod-network.f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Workload="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0"
Mar 3 12:47:55.118269 containerd[2007]: 2026-03-03 12:47:55.035 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Namespace="calico-system" Pod="whisker-6865674dc5-dhm4r" WorkloadEndpoint="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0", GenerateName:"whisker-6865674dc5-", Namespace:"calico-system", SelfLink:"", UID:"49914c72-56a2-4a88-aef6-bf6703375616", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6865674dc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"", Pod:"whisker-6865674dc5-dhm4r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie0994aaab41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 3 12:47:55.118269 containerd[2007]: 2026-03-03 12:47:55.035 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.1/32] ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Namespace="calico-system" Pod="whisker-6865674dc5-dhm4r" WorkloadEndpoint="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0"
Mar 3 12:47:55.118269 containerd[2007]: 2026-03-03 12:47:55.035 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0994aaab41 ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Namespace="calico-system" Pod="whisker-6865674dc5-dhm4r" WorkloadEndpoint="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0"
Mar 3 12:47:55.118269 containerd[2007]: 2026-03-03 12:47:55.069 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Namespace="calico-system" Pod="whisker-6865674dc5-dhm4r" WorkloadEndpoint="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0"
Mar 3 12:47:55.118269 containerd[2007]: 2026-03-03 12:47:55.072 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Namespace="calico-system"
Pod="whisker-6865674dc5-dhm4r" WorkloadEndpoint="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0", GenerateName:"whisker-6865674dc5-", Namespace:"calico-system", SelfLink:"", UID:"49914c72-56a2-4a88-aef6-bf6703375616", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6865674dc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00", Pod:"whisker-6865674dc5-dhm4r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie0994aaab41", MAC:"b6:b2:04:10:cc:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:47:55.118269 containerd[2007]: 2026-03-03 12:47:55.095 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" Namespace="calico-system" Pod="whisker-6865674dc5-dhm4r" WorkloadEndpoint="ip--172--31--25--173-k8s-whisker--6865674dc5--dhm4r-eth0" Mar 3 12:47:55.237089 containerd[2007]: 
time="2026-03-03T12:47:55.237031506Z" level=info msg="connecting to shim f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00" address="unix:///run/containerd/s/e49758dadfcac7af3a65573f56a703ac21fec709b8e173b5ee7a5612b49ca5ff" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:47:55.330631 systemd[1]: Started cri-containerd-f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00.scope - libcontainer container f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00. Mar 3 12:47:55.501023 containerd[2007]: time="2026-03-03T12:47:55.500887628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6865674dc5-dhm4r,Uid:49914c72-56a2-4a88-aef6-bf6703375616,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00\"" Mar 3 12:47:55.511462 containerd[2007]: time="2026-03-03T12:47:55.511101608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 3 12:47:56.373614 systemd-networkd[1853]: vxlan.calico: Link UP Mar 3 12:47:56.373633 systemd-networkd[1853]: vxlan.calico: Gained carrier Mar 3 12:47:56.412902 systemd-networkd[1853]: calie0994aaab41: Gained IPv6LL Mar 3 12:47:56.431114 (udev-worker)[4815]: Network interface NamePolicy= disabled on kernel command line. 
Mar 3 12:47:57.125797 containerd[2007]: time="2026-03-03T12:47:57.125102708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:57.127835 containerd[2007]: time="2026-03-03T12:47:57.127564544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 3 12:47:57.130180 containerd[2007]: time="2026-03-03T12:47:57.130100444Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:57.136253 containerd[2007]: time="2026-03-03T12:47:57.136167368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:57.137614 containerd[2007]: time="2026-03-03T12:47:57.137412788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.626239636s" Mar 3 12:47:57.137614 containerd[2007]: time="2026-03-03T12:47:57.137470328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 3 12:47:57.149107 containerd[2007]: time="2026-03-03T12:47:57.149019248Z" level=info msg="CreateContainer within sandbox \"f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 3 12:47:57.166949 containerd[2007]: time="2026-03-03T12:47:57.166873976Z" level=info 
msg="Container b6c33ed8c20b4cc2ee30045f9bb35174dc8f386ce25b0e5c3a0b91521042ea34: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:57.184578 containerd[2007]: time="2026-03-03T12:47:57.184497740Z" level=info msg="CreateContainer within sandbox \"f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b6c33ed8c20b4cc2ee30045f9bb35174dc8f386ce25b0e5c3a0b91521042ea34\"" Mar 3 12:47:57.186344 containerd[2007]: time="2026-03-03T12:47:57.185820848Z" level=info msg="StartContainer for \"b6c33ed8c20b4cc2ee30045f9bb35174dc8f386ce25b0e5c3a0b91521042ea34\"" Mar 3 12:47:57.188551 containerd[2007]: time="2026-03-03T12:47:57.188499128Z" level=info msg="connecting to shim b6c33ed8c20b4cc2ee30045f9bb35174dc8f386ce25b0e5c3a0b91521042ea34" address="unix:///run/containerd/s/e49758dadfcac7af3a65573f56a703ac21fec709b8e173b5ee7a5612b49ca5ff" protocol=ttrpc version=3 Mar 3 12:47:57.231067 systemd[1]: Started cri-containerd-b6c33ed8c20b4cc2ee30045f9bb35174dc8f386ce25b0e5c3a0b91521042ea34.scope - libcontainer container b6c33ed8c20b4cc2ee30045f9bb35174dc8f386ce25b0e5c3a0b91521042ea34. Mar 3 12:47:57.319255 containerd[2007]: time="2026-03-03T12:47:57.319152477Z" level=info msg="StartContainer for \"b6c33ed8c20b4cc2ee30045f9bb35174dc8f386ce25b0e5c3a0b91521042ea34\" returns successfully" Mar 3 12:47:57.322354 containerd[2007]: time="2026-03-03T12:47:57.322240245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 3 12:47:58.461170 systemd-networkd[1853]: vxlan.calico: Gained IPv6LL Mar 3 12:47:59.193505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831836186.mount: Deactivated successfully. 
Mar 3 12:47:59.214158 containerd[2007]: time="2026-03-03T12:47:59.214082494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:59.215790 containerd[2007]: time="2026-03-03T12:47:59.215668114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 3 12:47:59.218721 containerd[2007]: time="2026-03-03T12:47:59.217200430Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:59.221136 containerd[2007]: time="2026-03-03T12:47:59.221053486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:47:59.222778 containerd[2007]: time="2026-03-03T12:47:59.222733306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.900432497s" Mar 3 12:47:59.222942 containerd[2007]: time="2026-03-03T12:47:59.222913438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 3 12:47:59.233026 containerd[2007]: time="2026-03-03T12:47:59.232853590Z" level=info msg="CreateContainer within sandbox \"f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 3 12:47:59.249724 
containerd[2007]: time="2026-03-03T12:47:59.247934050Z" level=info msg="Container 3e5670157c79aded281bf5709fad19382ff8cca43601e2fc592fbc9dcdbdf768: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:47:59.274954 containerd[2007]: time="2026-03-03T12:47:59.274893058Z" level=info msg="CreateContainer within sandbox \"f7c02f7e71f0be4d9bdb70b657bcb65ecc867a4ba51f4da2074593561bec4c00\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3e5670157c79aded281bf5709fad19382ff8cca43601e2fc592fbc9dcdbdf768\"" Mar 3 12:47:59.278432 containerd[2007]: time="2026-03-03T12:47:59.278369842Z" level=info msg="StartContainer for \"3e5670157c79aded281bf5709fad19382ff8cca43601e2fc592fbc9dcdbdf768\"" Mar 3 12:47:59.281202 containerd[2007]: time="2026-03-03T12:47:59.281134534Z" level=info msg="connecting to shim 3e5670157c79aded281bf5709fad19382ff8cca43601e2fc592fbc9dcdbdf768" address="unix:///run/containerd/s/e49758dadfcac7af3a65573f56a703ac21fec709b8e173b5ee7a5612b49ca5ff" protocol=ttrpc version=3 Mar 3 12:47:59.324052 systemd[1]: Started cri-containerd-3e5670157c79aded281bf5709fad19382ff8cca43601e2fc592fbc9dcdbdf768.scope - libcontainer container 3e5670157c79aded281bf5709fad19382ff8cca43601e2fc592fbc9dcdbdf768. 
Mar 3 12:47:59.412372 containerd[2007]: time="2026-03-03T12:47:59.412220435Z" level=info msg="StartContainer for \"3e5670157c79aded281bf5709fad19382ff8cca43601e2fc592fbc9dcdbdf768\" returns successfully" Mar 3 12:48:00.227751 kubelet[3342]: I0303 12:48:00.227400 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-6865674dc5-dhm4r" podStartSLOduration=2.5112650370000003 podStartE2EDuration="6.227333651s" podCreationTimestamp="2026-03-03 12:47:54 +0000 UTC" firstStartedPulling="2026-03-03 12:47:55.508081904 +0000 UTC m=+53.059787233" lastFinishedPulling="2026-03-03 12:47:59.224150494 +0000 UTC m=+56.775855847" observedRunningTime="2026-03-03 12:48:00.225449987 +0000 UTC m=+57.777155328" watchObservedRunningTime="2026-03-03 12:48:00.227333651 +0000 UTC m=+57.779038980" Mar 3 12:48:00.725461 ntpd[2201]: Listen normally on 6 vxlan.calico 192.168.103.0:123 Mar 3 12:48:00.726242 ntpd[2201]: 3 Mar 12:48:00 ntpd[2201]: Listen normally on 6 vxlan.calico 192.168.103.0:123 Mar 3 12:48:00.726242 ntpd[2201]: 3 Mar 12:48:00 ntpd[2201]: Listen normally on 7 calie0994aaab41 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 3 12:48:00.726242 ntpd[2201]: 3 Mar 12:48:00 ntpd[2201]: Listen normally on 8 vxlan.calico [fe80::6465:75ff:fe92:2569%5]:123 Mar 3 12:48:00.725557 ntpd[2201]: Listen normally on 7 calie0994aaab41 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 3 12:48:00.725604 ntpd[2201]: Listen normally on 8 vxlan.calico [fe80::6465:75ff:fe92:2569%5]:123 Mar 3 12:48:02.766060 containerd[2007]: time="2026-03-03T12:48:02.765813604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gnfdj,Uid:83fbd844-9c20-4ee5-9da3-482034c2125b,Namespace:kube-system,Attempt:0,}" Mar 3 12:48:03.007095 systemd-networkd[1853]: cali970bb67423d: Link UP Mar 3 12:48:03.011160 (udev-worker)[5110]: Network interface NamePolicy= disabled on kernel command line. 
Mar 3 12:48:03.012391 systemd-networkd[1853]: cali970bb67423d: Gained carrier Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.852 [INFO][5090] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0 coredns-7d764666f9- kube-system 83fbd844-9c20-4ee5-9da3-482034c2125b 894 0 2026-03-03 12:47:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-173 coredns-7d764666f9-gnfdj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali970bb67423d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Namespace="kube-system" Pod="coredns-7d764666f9-gnfdj" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.852 [INFO][5090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Namespace="kube-system" Pod="coredns-7d764666f9-gnfdj" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.914 [INFO][5103] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" HandleID="k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Workload="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.930 [INFO][5103] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" 
HandleID="k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Workload="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed510), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-173", "pod":"coredns-7d764666f9-gnfdj", "timestamp":"2026-03-03 12:48:02.914195296 +0000 UTC"}, Hostname:"ip-172-31-25-173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b51e0)} Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.930 [INFO][5103] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.930 [INFO][5103] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.930 [INFO][5103] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-173' Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.934 [INFO][5103] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.941 [INFO][5103] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.954 [INFO][5103] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.958 [INFO][5103] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.965 [INFO][5103] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.965 [INFO][5103] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.969 [INFO][5103] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991 Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.977 [INFO][5103] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.988 [INFO][5103] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.103.2/26] block=192.168.103.0/26 handle="k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.988 [INFO][5103] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.2/26] handle="k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" host="ip-172-31-25-173" Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.988 [INFO][5103] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 12:48:03.047215 containerd[2007]: 2026-03-03 12:48:02.988 [INFO][5103] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.2/26] IPv6=[] ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" HandleID="k8s-pod-network.0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Workload="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" Mar 3 12:48:03.050534 containerd[2007]: 2026-03-03 12:48:02.994 [INFO][5090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Namespace="kube-system" Pod="coredns-7d764666f9-gnfdj" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"83fbd844-9c20-4ee5-9da3-482034c2125b", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"", Pod:"coredns-7d764666f9-gnfdj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali970bb67423d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:03.050534 containerd[2007]: 2026-03-03 12:48:02.995 [INFO][5090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.2/32] ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Namespace="kube-system" Pod="coredns-7d764666f9-gnfdj" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" Mar 3 12:48:03.050534 containerd[2007]: 2026-03-03 12:48:02.995 [INFO][5090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali970bb67423d ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Namespace="kube-system" Pod="coredns-7d764666f9-gnfdj" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" Mar 3 12:48:03.050534 containerd[2007]: 2026-03-03 12:48:03.014 [INFO][5090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Namespace="kube-system" Pod="coredns-7d764666f9-gnfdj" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" Mar 3 12:48:03.051206 containerd[2007]: 2026-03-03 12:48:03.015 [INFO][5090] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Namespace="kube-system" Pod="coredns-7d764666f9-gnfdj" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"83fbd844-9c20-4ee5-9da3-482034c2125b", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991", Pod:"coredns-7d764666f9-gnfdj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali970bb67423d", MAC:"96:d9:ac:93:42:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:03.051206 containerd[2007]: 2026-03-03 12:48:03.035 [INFO][5090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" Namespace="kube-system" Pod="coredns-7d764666f9-gnfdj" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--gnfdj-eth0" Mar 3 12:48:03.115960 containerd[2007]: time="2026-03-03T12:48:03.115335361Z" level=info msg="connecting to shim 0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991" address="unix:///run/containerd/s/243dc3b0cc17335e17af27a6a7146b4c811f62ea48abddca534c3bed98ff60e1" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:48:03.184400 systemd[1]: Started cri-containerd-0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991.scope - libcontainer container 0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991. 
Mar 3 12:48:03.281260 containerd[2007]: time="2026-03-03T12:48:03.281207594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gnfdj,Uid:83fbd844-9c20-4ee5-9da3-482034c2125b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991\"" Mar 3 12:48:03.292547 containerd[2007]: time="2026-03-03T12:48:03.292432694Z" level=info msg="CreateContainer within sandbox \"0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 12:48:03.325843 containerd[2007]: time="2026-03-03T12:48:03.324109394Z" level=info msg="Container 7303b4dba0dbd63216852ec72b4ede9580fe33e4fd748e894b34f6adfa52cff6: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:48:03.355861 containerd[2007]: time="2026-03-03T12:48:03.355762059Z" level=info msg="CreateContainer within sandbox \"0dd31eeb3e8fc15609842984cdd18747adac3b4351a1974bbe33635fa5c9a991\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7303b4dba0dbd63216852ec72b4ede9580fe33e4fd748e894b34f6adfa52cff6\"" Mar 3 12:48:03.358619 containerd[2007]: time="2026-03-03T12:48:03.358018419Z" level=info msg="StartContainer for \"7303b4dba0dbd63216852ec72b4ede9580fe33e4fd748e894b34f6adfa52cff6\"" Mar 3 12:48:03.362615 containerd[2007]: time="2026-03-03T12:48:03.362470383Z" level=info msg="connecting to shim 7303b4dba0dbd63216852ec72b4ede9580fe33e4fd748e894b34f6adfa52cff6" address="unix:///run/containerd/s/243dc3b0cc17335e17af27a6a7146b4c811f62ea48abddca534c3bed98ff60e1" protocol=ttrpc version=3 Mar 3 12:48:03.397002 systemd[1]: Started cri-containerd-7303b4dba0dbd63216852ec72b4ede9580fe33e4fd748e894b34f6adfa52cff6.scope - libcontainer container 7303b4dba0dbd63216852ec72b4ede9580fe33e4fd748e894b34f6adfa52cff6. 
Mar 3 12:48:03.461939 containerd[2007]: time="2026-03-03T12:48:03.461883135Z" level=info msg="StartContainer for \"7303b4dba0dbd63216852ec72b4ede9580fe33e4fd748e894b34f6adfa52cff6\" returns successfully" Mar 3 12:48:04.251058 kubelet[3342]: I0303 12:48:04.250965 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gnfdj" podStartSLOduration=57.250943559 podStartE2EDuration="57.250943559s" podCreationTimestamp="2026-03-03 12:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:48:04.248595471 +0000 UTC m=+61.800300824" watchObservedRunningTime="2026-03-03 12:48:04.250943559 +0000 UTC m=+61.802648900" Mar 3 12:48:04.412060 systemd-networkd[1853]: cali970bb67423d: Gained IPv6LL Mar 3 12:48:05.764225 containerd[2007]: time="2026-03-03T12:48:05.763753471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wjtnn,Uid:1be38b67-1840-4e80-88ec-b6d5bc54edee,Namespace:calico-system,Attempt:0,}" Mar 3 12:48:05.771029 containerd[2007]: time="2026-03-03T12:48:05.770051239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579696bc9c-ppbr5,Uid:6a0ee5eb-a971-47d3-b4fb-2d02d9887304,Namespace:calico-system,Attempt:0,}" Mar 3 12:48:05.774234 containerd[2007]: time="2026-03-03T12:48:05.774131695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579696bc9c-5btc9,Uid:6e1a928f-acc4-4d48-9017-c2fbcea7def2,Namespace:calico-system,Attempt:0,}" Mar 3 12:48:06.145877 systemd-networkd[1853]: cali307f2b757b2: Link UP Mar 3 12:48:06.148235 systemd-networkd[1853]: cali307f2b757b2: Gained carrier Mar 3 12:48:06.166372 (udev-worker)[5280]: Network interface NamePolicy= disabled on kernel command line. 
Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:05.890 [INFO][5221] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0 goldmane-9f7667bb8- calico-system 1be38b67-1840-4e80-88ec-b6d5bc54edee 896 0 2026-03-03 12:47:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-25-173 goldmane-9f7667bb8-wjtnn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali307f2b757b2 [] [] }} ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Namespace="calico-system" Pod="goldmane-9f7667bb8-wjtnn" WorkloadEndpoint="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:05.890 [INFO][5221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Namespace="calico-system" Pod="goldmane-9f7667bb8-wjtnn" WorkloadEndpoint="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.004 [INFO][5253] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" HandleID="k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Workload="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.059 [INFO][5253] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" HandleID="k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" 
Workload="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102310), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-173", "pod":"goldmane-9f7667bb8-wjtnn", "timestamp":"2026-03-03 12:48:06.00422176 +0000 UTC"}, Hostname:"ip-172-31-25-173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400012cb00)} Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.059 [INFO][5253] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.061 [INFO][5253] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.061 [INFO][5253] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-173' Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.066 [INFO][5253] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.079 [INFO][5253] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.091 [INFO][5253] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.097 [INFO][5253] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.102 [INFO][5253] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.103 
[INFO][5253] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.108 [INFO][5253] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013 Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.118 [INFO][5253] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.132 [INFO][5253] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.103.3/26] block=192.168.103.0/26 handle="k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.133 [INFO][5253] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.3/26] handle="k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" host="ip-172-31-25-173" Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.133 [INFO][5253] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 12:48:06.183429 containerd[2007]: 2026-03-03 12:48:06.133 [INFO][5253] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.3/26] IPv6=[] ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" HandleID="k8s-pod-network.3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Workload="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" Mar 3 12:48:06.186910 containerd[2007]: 2026-03-03 12:48:06.140 [INFO][5221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Namespace="calico-system" Pod="goldmane-9f7667bb8-wjtnn" WorkloadEndpoint="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1be38b67-1840-4e80-88ec-b6d5bc54edee", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"", Pod:"goldmane-9f7667bb8-wjtnn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali307f2b757b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:06.186910 containerd[2007]: 2026-03-03 12:48:06.140 [INFO][5221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.3/32] ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Namespace="calico-system" Pod="goldmane-9f7667bb8-wjtnn" WorkloadEndpoint="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" Mar 3 12:48:06.186910 containerd[2007]: 2026-03-03 12:48:06.140 [INFO][5221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali307f2b757b2 ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Namespace="calico-system" Pod="goldmane-9f7667bb8-wjtnn" WorkloadEndpoint="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" Mar 3 12:48:06.186910 containerd[2007]: 2026-03-03 12:48:06.149 [INFO][5221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Namespace="calico-system" Pod="goldmane-9f7667bb8-wjtnn" WorkloadEndpoint="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" Mar 3 12:48:06.186910 containerd[2007]: 2026-03-03 12:48:06.149 [INFO][5221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Namespace="calico-system" Pod="goldmane-9f7667bb8-wjtnn" WorkloadEndpoint="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1be38b67-1840-4e80-88ec-b6d5bc54edee", ResourceVersion:"896", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013", Pod:"goldmane-9f7667bb8-wjtnn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali307f2b757b2", MAC:"02:f1:b2:78:1b:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:06.186910 containerd[2007]: 2026-03-03 12:48:06.171 [INFO][5221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" Namespace="calico-system" Pod="goldmane-9f7667bb8-wjtnn" WorkloadEndpoint="ip--172--31--25--173-k8s-goldmane--9f7667bb8--wjtnn-eth0" Mar 3 12:48:06.293186 (udev-worker)[5286]: Network interface NamePolicy= disabled on kernel command line. 
Mar 3 12:48:06.308050 systemd-networkd[1853]: cali8d3c405f263: Link UP Mar 3 12:48:06.316468 systemd-networkd[1853]: cali8d3c405f263: Gained carrier Mar 3 12:48:06.346260 containerd[2007]: time="2026-03-03T12:48:06.346177937Z" level=info msg="connecting to shim 3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013" address="unix:///run/containerd/s/46f699735e402ca2feb4bb313a47eaac9309891269cb8ebebd1a0b9a31be4d0c" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:05.922 [INFO][5232] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0 calico-apiserver-579696bc9c- calico-system 6e1a928f-acc4-4d48-9017-c2fbcea7def2 899 0 2026-03-03 12:47:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579696bc9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-173 calico-apiserver-579696bc9c-5btc9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8d3c405f263 [] [] }} ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-5btc9" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:05.923 [INFO][5232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-5btc9" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.041 [INFO][5261] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" HandleID="k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Workload="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.079 [INFO][5261] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" HandleID="k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Workload="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000402020), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-173", "pod":"calico-apiserver-579696bc9c-5btc9", "timestamp":"2026-03-03 12:48:06.041100604 +0000 UTC"}, Hostname:"ip-172-31-25-173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400038a420)} Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.080 [INFO][5261] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.133 [INFO][5261] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.133 [INFO][5261] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-173' Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.170 [INFO][5261] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.190 [INFO][5261] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.208 [INFO][5261] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.213 [INFO][5261] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.221 [INFO][5261] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.221 [INFO][5261] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.229 [INFO][5261] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0 Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.242 [INFO][5261] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.264 [INFO][5261] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.103.4/26] block=192.168.103.0/26 
handle="k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.265 [INFO][5261] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.4/26] handle="k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" host="ip-172-31-25-173" Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.266 [INFO][5261] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 12:48:06.398465 containerd[2007]: 2026-03-03 12:48:06.266 [INFO][5261] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.4/26] IPv6=[] ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" HandleID="k8s-pod-network.0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Workload="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" Mar 3 12:48:06.401149 containerd[2007]: 2026-03-03 12:48:06.283 [INFO][5232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-5btc9" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0", GenerateName:"calico-apiserver-579696bc9c-", Namespace:"calico-system", SelfLink:"", UID:"6e1a928f-acc4-4d48-9017-c2fbcea7def2", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579696bc9c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"", Pod:"calico-apiserver-579696bc9c-5btc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8d3c405f263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:06.401149 containerd[2007]: 2026-03-03 12:48:06.283 [INFO][5232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.4/32] ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-5btc9" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" Mar 3 12:48:06.401149 containerd[2007]: 2026-03-03 12:48:06.284 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d3c405f263 ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-5btc9" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" Mar 3 12:48:06.401149 containerd[2007]: 2026-03-03 12:48:06.318 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-5btc9" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" Mar 3 12:48:06.401149 containerd[2007]: 2026-03-03 12:48:06.327 [INFO][5232] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-5btc9" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0", GenerateName:"calico-apiserver-579696bc9c-", Namespace:"calico-system", SelfLink:"", UID:"6e1a928f-acc4-4d48-9017-c2fbcea7def2", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579696bc9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0", Pod:"calico-apiserver-579696bc9c-5btc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8d3c405f263", MAC:"e2:e9:e5:35:e9:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:06.401149 containerd[2007]: 2026-03-03 12:48:06.376 [INFO][5232] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-5btc9" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--5btc9-eth0" Mar 3 12:48:06.464049 systemd[1]: Started cri-containerd-3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013.scope - libcontainer container 3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013. Mar 3 12:48:06.493597 containerd[2007]: time="2026-03-03T12:48:06.493536618Z" level=info msg="connecting to shim 0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0" address="unix:///run/containerd/s/0addbab4869a2b83e2d4f7ddbd2d2c93e2e42ad02e4ed2410b16ba7e521bbd08" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:48:06.540346 systemd-networkd[1853]: cali13088f67c91: Link UP Mar 3 12:48:06.545961 systemd-networkd[1853]: cali13088f67c91: Gained carrier Mar 3 12:48:06.559091 systemd[1]: Started cri-containerd-0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0.scope - libcontainer container 0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0. 
Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:05.967 [INFO][5231] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0 calico-apiserver-579696bc9c- calico-system 6a0ee5eb-a971-47d3-b4fb-2d02d9887304 891 0 2026-03-03 12:47:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579696bc9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-173 calico-apiserver-579696bc9c-ppbr5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali13088f67c91 [] [] }} ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-ppbr5" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:05.968 [INFO][5231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-ppbr5" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.077 [INFO][5267] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" HandleID="k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Workload="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.101 [INFO][5267] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" 
HandleID="k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Workload="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c180), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-173", "pod":"calico-apiserver-579696bc9c-ppbr5", "timestamp":"2026-03-03 12:48:06.077355688 +0000 UTC"}, Hostname:"ip-172-31-25-173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186c60)} Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.101 [INFO][5267] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.266 [INFO][5267] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.267 [INFO][5267] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-173' Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.277 [INFO][5267] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.304 [INFO][5267] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.337 [INFO][5267] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.362 [INFO][5267] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.409 [INFO][5267] ipam/ipam.go 237: Affinity is confirmed and block has been 
loaded cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.409 [INFO][5267] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.428 [INFO][5267] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.469 [INFO][5267] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.524 [INFO][5267] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.103.5/26] block=192.168.103.0/26 handle="k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.525 [INFO][5267] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.5/26] handle="k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" host="ip-172-31-25-173" Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.525 [INFO][5267] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 12:48:06.612079 containerd[2007]: 2026-03-03 12:48:06.525 [INFO][5267] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.5/26] IPv6=[] ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" HandleID="k8s-pod-network.019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Workload="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" Mar 3 12:48:06.614430 containerd[2007]: 2026-03-03 12:48:06.536 [INFO][5231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-ppbr5" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0", GenerateName:"calico-apiserver-579696bc9c-", Namespace:"calico-system", SelfLink:"", UID:"6a0ee5eb-a971-47d3-b4fb-2d02d9887304", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579696bc9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"", Pod:"calico-apiserver-579696bc9c-ppbr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali13088f67c91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:06.614430 containerd[2007]: 2026-03-03 12:48:06.536 [INFO][5231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.5/32] ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-ppbr5" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" Mar 3 12:48:06.614430 containerd[2007]: 2026-03-03 12:48:06.536 [INFO][5231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13088f67c91 ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-ppbr5" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" Mar 3 12:48:06.614430 containerd[2007]: 2026-03-03 12:48:06.541 [INFO][5231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-ppbr5" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" Mar 3 12:48:06.614430 containerd[2007]: 2026-03-03 12:48:06.542 [INFO][5231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-ppbr5" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0", GenerateName:"calico-apiserver-579696bc9c-", Namespace:"calico-system", SelfLink:"", UID:"6a0ee5eb-a971-47d3-b4fb-2d02d9887304", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579696bc9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda", Pod:"calico-apiserver-579696bc9c-ppbr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali13088f67c91", MAC:"ee:e2:f2:25:29:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:06.614430 containerd[2007]: 2026-03-03 12:48:06.605 [INFO][5231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" Namespace="calico-system" Pod="calico-apiserver-579696bc9c-ppbr5" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--apiserver--579696bc9c--ppbr5-eth0" Mar 3 12:48:06.661746 containerd[2007]: time="2026-03-03T12:48:06.660158371Z" level=info msg="connecting to shim 
019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda" address="unix:///run/containerd/s/b3352cebef6f0534e955f180bb5a5178b26a2421913fbe52189071691d644ded" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:48:06.747046 systemd[1]: Started cri-containerd-019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda.scope - libcontainer container 019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda. Mar 3 12:48:06.777418 containerd[2007]: time="2026-03-03T12:48:06.777364052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795844594-88vdk,Uid:ec879cb1-4c4f-4c95-8084-9fe73f99a62f,Namespace:calico-system,Attempt:0,}" Mar 3 12:48:06.981015 containerd[2007]: time="2026-03-03T12:48:06.980577945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579696bc9c-5btc9,Uid:6e1a928f-acc4-4d48-9017-c2fbcea7def2,Namespace:calico-system,Attempt:0,} returns sandbox id \"0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0\"" Mar 3 12:48:06.992668 containerd[2007]: time="2026-03-03T12:48:06.992374689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 12:48:07.082260 containerd[2007]: time="2026-03-03T12:48:07.081468341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wjtnn,Uid:1be38b67-1840-4e80-88ec-b6d5bc54edee,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013\"" Mar 3 12:48:07.108100 containerd[2007]: time="2026-03-03T12:48:07.107774513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579696bc9c-ppbr5,Uid:6a0ee5eb-a971-47d3-b4fb-2d02d9887304,Namespace:calico-system,Attempt:0,} returns sandbox id \"019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda\"" Mar 3 12:48:07.264509 systemd-networkd[1853]: cali74b83cabc4e: Link UP Mar 3 12:48:07.270333 systemd-networkd[1853]: cali74b83cabc4e: Gained carrier Mar 3 
12:48:07.306776 containerd[2007]: 2026-03-03 12:48:06.951 [INFO][5430] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0 calico-kube-controllers-795844594- calico-system ec879cb1-4c4f-4c95-8084-9fe73f99a62f 898 0 2026-03-03 12:47:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:795844594 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-173 calico-kube-controllers-795844594-88vdk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali74b83cabc4e [] [] }} ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Namespace="calico-system" Pod="calico-kube-controllers-795844594-88vdk" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:06.952 [INFO][5430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Namespace="calico-system" Pod="calico-kube-controllers-795844594-88vdk" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.152 [INFO][5448] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" HandleID="k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Workload="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.172 [INFO][5448] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" HandleID="k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Workload="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bc8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-173", "pod":"calico-kube-controllers-795844594-88vdk", "timestamp":"2026-03-03 12:48:07.152002397 +0000 UTC"}, Hostname:"ip-172-31-25-173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002a8420)} Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.172 [INFO][5448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.172 [INFO][5448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.173 [INFO][5448] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-173' Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.177 [INFO][5448] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.187 [INFO][5448] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.197 [INFO][5448] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.201 [INFO][5448] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.207 [INFO][5448] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.207 [INFO][5448] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.211 [INFO][5448] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36 Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.219 [INFO][5448] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.236 [INFO][5448] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.103.6/26] block=192.168.103.0/26 
handle="k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.238 [INFO][5448] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.6/26] handle="k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" host="ip-172-31-25-173" Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.239 [INFO][5448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 12:48:07.306776 containerd[2007]: 2026-03-03 12:48:07.239 [INFO][5448] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.6/26] IPv6=[] ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" HandleID="k8s-pod-network.aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Workload="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" Mar 3 12:48:07.309430 containerd[2007]: 2026-03-03 12:48:07.250 [INFO][5430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Namespace="calico-system" Pod="calico-kube-controllers-795844594-88vdk" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0", GenerateName:"calico-kube-controllers-795844594-", Namespace:"calico-system", SelfLink:"", UID:"ec879cb1-4c4f-4c95-8084-9fe73f99a62f", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795844594", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"", Pod:"calico-kube-controllers-795844594-88vdk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali74b83cabc4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:07.309430 containerd[2007]: 2026-03-03 12:48:07.251 [INFO][5430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.6/32] ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Namespace="calico-system" Pod="calico-kube-controllers-795844594-88vdk" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" Mar 3 12:48:07.309430 containerd[2007]: 2026-03-03 12:48:07.251 [INFO][5430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74b83cabc4e ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Namespace="calico-system" Pod="calico-kube-controllers-795844594-88vdk" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" Mar 3 12:48:07.309430 containerd[2007]: 2026-03-03 12:48:07.275 [INFO][5430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Namespace="calico-system" Pod="calico-kube-controllers-795844594-88vdk" 
WorkloadEndpoint="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" Mar 3 12:48:07.309430 containerd[2007]: 2026-03-03 12:48:07.277 [INFO][5430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Namespace="calico-system" Pod="calico-kube-controllers-795844594-88vdk" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0", GenerateName:"calico-kube-controllers-795844594-", Namespace:"calico-system", SelfLink:"", UID:"ec879cb1-4c4f-4c95-8084-9fe73f99a62f", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795844594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36", Pod:"calico-kube-controllers-795844594-88vdk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali74b83cabc4e", MAC:"6e:be:6d:a0:db:67", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:07.309430 containerd[2007]: 2026-03-03 12:48:07.299 [INFO][5430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" Namespace="calico-system" Pod="calico-kube-controllers-795844594-88vdk" WorkloadEndpoint="ip--172--31--25--173-k8s-calico--kube--controllers--795844594--88vdk-eth0" Mar 3 12:48:07.395741 containerd[2007]: time="2026-03-03T12:48:07.395014987Z" level=info msg="connecting to shim aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36" address="unix:///run/containerd/s/5554d44ff10aea33b79ced95e794dda954755cd2e070c8040f789baef281e4af" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:48:07.460040 systemd[1]: Started cri-containerd-aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36.scope - libcontainer container aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36. 
Mar 3 12:48:07.483995 systemd-networkd[1853]: cali8d3c405f263: Gained IPv6LL Mar 3 12:48:07.563458 containerd[2007]: time="2026-03-03T12:48:07.563369468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795844594-88vdk,Uid:ec879cb1-4c4f-4c95-8084-9fe73f99a62f,Namespace:calico-system,Attempt:0,} returns sandbox id \"aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36\"" Mar 3 12:48:07.676789 systemd-networkd[1853]: cali307f2b757b2: Gained IPv6LL Mar 3 12:48:07.763581 containerd[2007]: time="2026-03-03T12:48:07.763508301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-89x9d,Uid:7cdb4c7c-a3da-4294-8141-e0392c7cd04f,Namespace:calico-system,Attempt:0,}" Mar 3 12:48:07.772683 containerd[2007]: time="2026-03-03T12:48:07.770779113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b8v6l,Uid:355afa55-697e-446f-9dc6-af783b23ea1e,Namespace:kube-system,Attempt:0,}" Mar 3 12:48:08.105986 systemd-networkd[1853]: cali38fad329b8b: Link UP Mar 3 12:48:08.108074 systemd-networkd[1853]: cali38fad329b8b: Gained carrier Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:07.895 [INFO][5558] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0 csi-node-driver- calico-system 7cdb4c7c-a3da-4294-8141-e0392c7cd04f 756 0 2026-03-03 12:47:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-173 csi-node-driver-89x9d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali38fad329b8b [] [] }} ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" 
Namespace="calico-system" Pod="csi-node-driver-89x9d" WorkloadEndpoint="ip--172--31--25--173-k8s-csi--node--driver--89x9d-" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:07.895 [INFO][5558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Namespace="calico-system" Pod="csi-node-driver-89x9d" WorkloadEndpoint="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:07.993 [INFO][5584] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" HandleID="k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Workload="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.028 [INFO][5584] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" HandleID="k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Workload="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103940), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-173", "pod":"csi-node-driver-89x9d", "timestamp":"2026-03-03 12:48:07.993528862 +0000 UTC"}, Hostname:"ip-172-31-25-173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003bb1e0)} Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.030 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.033 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.034 [INFO][5584] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-173' Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.039 [INFO][5584] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.051 [INFO][5584] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.059 [INFO][5584] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.063 [INFO][5584] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.068 [INFO][5584] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.068 [INFO][5584] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.071 [INFO][5584] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.080 [INFO][5584] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.093 [INFO][5584] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.103.7/26] block=192.168.103.0/26 
handle="k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.093 [INFO][5584] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.7/26] handle="k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" host="ip-172-31-25-173" Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.094 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 12:48:08.152879 containerd[2007]: 2026-03-03 12:48:08.094 [INFO][5584] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.7/26] IPv6=[] ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" HandleID="k8s-pod-network.860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Workload="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" Mar 3 12:48:08.158035 containerd[2007]: 2026-03-03 12:48:08.099 [INFO][5558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Namespace="calico-system" Pod="csi-node-driver-89x9d" WorkloadEndpoint="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cdb4c7c-a3da-4294-8141-e0392c7cd04f", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"", Pod:"csi-node-driver-89x9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38fad329b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:08.158035 containerd[2007]: 2026-03-03 12:48:08.099 [INFO][5558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.7/32] ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Namespace="calico-system" Pod="csi-node-driver-89x9d" WorkloadEndpoint="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" Mar 3 12:48:08.158035 containerd[2007]: 2026-03-03 12:48:08.099 [INFO][5558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38fad329b8b ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Namespace="calico-system" Pod="csi-node-driver-89x9d" WorkloadEndpoint="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" Mar 3 12:48:08.158035 containerd[2007]: 2026-03-03 12:48:08.108 [INFO][5558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Namespace="calico-system" Pod="csi-node-driver-89x9d" WorkloadEndpoint="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" Mar 3 12:48:08.158035 containerd[2007]: 2026-03-03 12:48:08.110 [INFO][5558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Namespace="calico-system" Pod="csi-node-driver-89x9d" WorkloadEndpoint="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cdb4c7c-a3da-4294-8141-e0392c7cd04f", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d", Pod:"csi-node-driver-89x9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38fad329b8b", MAC:"3a:d8:c3:e1:4b:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:08.158035 containerd[2007]: 2026-03-03 12:48:08.142 [INFO][5558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" Namespace="calico-system" Pod="csi-node-driver-89x9d" WorkloadEndpoint="ip--172--31--25--173-k8s-csi--node--driver--89x9d-eth0" Mar 3 12:48:08.188437 systemd-networkd[1853]: cali13088f67c91: Gained IPv6LL Mar 3 12:48:08.231010 containerd[2007]: time="2026-03-03T12:48:08.230855407Z" level=info msg="connecting to shim 860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d" address="unix:///run/containerd/s/9922e5187fe8b2763ee39a7fd45f30bdc537796f151c15015ca99501f45d0d4d" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:48:08.283658 systemd-networkd[1853]: cali5490444b7e6: Link UP Mar 3 12:48:08.285773 systemd-networkd[1853]: cali5490444b7e6: Gained carrier Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:07.914 [INFO][5563] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0 coredns-7d764666f9- kube-system 355afa55-697e-446f-9dc6-af783b23ea1e 887 0 2026-03-03 12:47:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-173 coredns-7d764666f9-b8v6l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5490444b7e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Namespace="kube-system" Pod="coredns-7d764666f9-b8v6l" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:07.916 [INFO][5563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Namespace="kube-system" 
Pod="coredns-7d764666f9-b8v6l" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.005 [INFO][5589] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" HandleID="k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Workload="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.033 [INFO][5589] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" HandleID="k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Workload="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400068c040), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-173", "pod":"coredns-7d764666f9-b8v6l", "timestamp":"2026-03-03 12:48:08.005605362 +0000 UTC"}, Hostname:"ip-172-31-25-173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000310f20)} Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.033 [INFO][5589] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.093 [INFO][5589] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.093 [INFO][5589] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-173' Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.139 [INFO][5589] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.159 [INFO][5589] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.171 [INFO][5589] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.176 [INFO][5589] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.183 [INFO][5589] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.184 [INFO][5589] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.187 [INFO][5589] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4 Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.212 [INFO][5589] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.237 [INFO][5589] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.103.8/26] block=192.168.103.0/26 
handle="k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.237 [INFO][5589] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.8/26] handle="k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" host="ip-172-31-25-173" Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.237 [INFO][5589] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 12:48:08.354443 containerd[2007]: 2026-03-03 12:48:08.237 [INFO][5589] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.8/26] IPv6=[] ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" HandleID="k8s-pod-network.562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Workload="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" Mar 3 12:48:08.356014 containerd[2007]: 2026-03-03 12:48:08.275 [INFO][5563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Namespace="kube-system" Pod="coredns-7d764666f9-b8v6l" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"355afa55-697e-446f-9dc6-af783b23ea1e", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"", Pod:"coredns-7d764666f9-b8v6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5490444b7e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:08.356014 containerd[2007]: 2026-03-03 12:48:08.276 [INFO][5563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.8/32] ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Namespace="kube-system" Pod="coredns-7d764666f9-b8v6l" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" Mar 3 12:48:08.356014 containerd[2007]: 2026-03-03 12:48:08.276 [INFO][5563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5490444b7e6 ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Namespace="kube-system" Pod="coredns-7d764666f9-b8v6l" 
WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" Mar 3 12:48:08.356014 containerd[2007]: 2026-03-03 12:48:08.283 [INFO][5563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Namespace="kube-system" Pod="coredns-7d764666f9-b8v6l" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" Mar 3 12:48:08.356787 containerd[2007]: 2026-03-03 12:48:08.285 [INFO][5563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Namespace="kube-system" Pod="coredns-7d764666f9-b8v6l" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"355afa55-697e-446f-9dc6-af783b23ea1e", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 12, 47, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-173", ContainerID:"562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4", Pod:"coredns-7d764666f9-b8v6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5490444b7e6", MAC:"02:36:d2:ba:a0:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 12:48:08.356787 containerd[2007]: 2026-03-03 12:48:08.314 [INFO][5563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" Namespace="kube-system" Pod="coredns-7d764666f9-b8v6l" WorkloadEndpoint="ip--172--31--25--173-k8s-coredns--7d764666f9--b8v6l-eth0" Mar 3 12:48:08.370137 systemd[1]: Started cri-containerd-860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d.scope - libcontainer container 860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d. 
Mar 3 12:48:08.429597 containerd[2007]: time="2026-03-03T12:48:08.429515084Z" level=info msg="connecting to shim 562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4" address="unix:///run/containerd/s/c211eccec8f17a25d908c8e29a78eb5273dcaba87ec3019ae25918e960431903" namespace=k8s.io protocol=ttrpc version=3 Mar 3 12:48:08.560058 systemd[1]: Started cri-containerd-562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4.scope - libcontainer container 562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4. Mar 3 12:48:08.576209 systemd-networkd[1853]: cali74b83cabc4e: Gained IPv6LL Mar 3 12:48:08.631006 containerd[2007]: time="2026-03-03T12:48:08.630832245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-89x9d,Uid:7cdb4c7c-a3da-4294-8141-e0392c7cd04f,Namespace:calico-system,Attempt:0,} returns sandbox id \"860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d\"" Mar 3 12:48:08.936771 containerd[2007]: time="2026-03-03T12:48:08.936439042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b8v6l,Uid:355afa55-697e-446f-9dc6-af783b23ea1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4\"" Mar 3 12:48:08.954463 containerd[2007]: time="2026-03-03T12:48:08.954314926Z" level=info msg="CreateContainer within sandbox \"562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 12:48:08.994748 containerd[2007]: time="2026-03-03T12:48:08.988216127Z" level=info msg="Container fb0e8e8ef75b02b3956b32cf025ff9529bdc29a24506708226378d42c70e8e43: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:48:09.031375 containerd[2007]: time="2026-03-03T12:48:09.031305187Z" level=info msg="CreateContainer within sandbox \"562f5e0a1b02eb7c2212a8b463672381e8bdae7ffd2f815fabe5e1fa274c8ba4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"fb0e8e8ef75b02b3956b32cf025ff9529bdc29a24506708226378d42c70e8e43\"" Mar 3 12:48:09.034557 containerd[2007]: time="2026-03-03T12:48:09.034152427Z" level=info msg="StartContainer for \"fb0e8e8ef75b02b3956b32cf025ff9529bdc29a24506708226378d42c70e8e43\"" Mar 3 12:48:09.042579 containerd[2007]: time="2026-03-03T12:48:09.042479647Z" level=info msg="connecting to shim fb0e8e8ef75b02b3956b32cf025ff9529bdc29a24506708226378d42c70e8e43" address="unix:///run/containerd/s/c211eccec8f17a25d908c8e29a78eb5273dcaba87ec3019ae25918e960431903" protocol=ttrpc version=3 Mar 3 12:48:09.096467 systemd[1]: Started sshd@7-172.31.25.173:22-20.161.92.111:37716.service - OpenSSH per-connection server daemon (20.161.92.111:37716). Mar 3 12:48:09.214188 systemd[1]: Started cri-containerd-fb0e8e8ef75b02b3956b32cf025ff9529bdc29a24506708226378d42c70e8e43.scope - libcontainer container fb0e8e8ef75b02b3956b32cf025ff9529bdc29a24506708226378d42c70e8e43. Mar 3 12:48:09.366037 containerd[2007]: time="2026-03-03T12:48:09.365942432Z" level=info msg="StartContainer for \"fb0e8e8ef75b02b3956b32cf025ff9529bdc29a24506708226378d42c70e8e43\" returns successfully" Mar 3 12:48:09.675515 sshd[5726]: Accepted publickey for core from 20.161.92.111 port 37716 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:09.683148 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:09.697770 systemd-logind[1978]: New session 8 of user core. Mar 3 12:48:09.707133 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 3 12:48:09.981771 systemd-networkd[1853]: cali38fad329b8b: Gained IPv6LL Mar 3 12:48:10.172148 systemd-networkd[1853]: cali5490444b7e6: Gained IPv6LL Mar 3 12:48:10.178804 sshd[5778]: Connection closed by 20.161.92.111 port 37716 Mar 3 12:48:10.177368 sshd-session[5726]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:10.190500 systemd[1]: sshd@7-172.31.25.173:22-20.161.92.111:37716.service: Deactivated successfully. Mar 3 12:48:10.202736 systemd[1]: session-8.scope: Deactivated successfully. Mar 3 12:48:10.209281 systemd-logind[1978]: Session 8 logged out. Waiting for processes to exit. Mar 3 12:48:10.213919 systemd-logind[1978]: Removed session 8. Mar 3 12:48:10.418597 kubelet[3342]: I0303 12:48:10.417506 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-b8v6l" podStartSLOduration=63.417484006 podStartE2EDuration="1m3.417484006s" podCreationTimestamp="2026-03-03 12:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 12:48:10.365236989 +0000 UTC m=+67.916942318" watchObservedRunningTime="2026-03-03 12:48:10.417484006 +0000 UTC m=+67.969189335" Mar 3 12:48:10.925129 containerd[2007]: time="2026-03-03T12:48:10.925072500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:10.927227 containerd[2007]: time="2026-03-03T12:48:10.927170676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 3 12:48:10.929456 containerd[2007]: time="2026-03-03T12:48:10.929378388Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:10.935737 containerd[2007]: time="2026-03-03T12:48:10.935637084Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:10.937266 containerd[2007]: time="2026-03-03T12:48:10.937221468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.944453587s" Mar 3 12:48:10.937448 containerd[2007]: time="2026-03-03T12:48:10.937418016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 3 12:48:10.939949 containerd[2007]: time="2026-03-03T12:48:10.939886836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 3 12:48:10.951594 containerd[2007]: time="2026-03-03T12:48:10.950677608Z" level=info msg="CreateContainer within sandbox \"0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 3 12:48:10.971724 containerd[2007]: time="2026-03-03T12:48:10.969565956Z" level=info msg="Container 652230cfbcaacd822ef8e1a0446b1a7a00fc7646ae1c9191797483a493a4f164: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:48:10.992331 containerd[2007]: time="2026-03-03T12:48:10.992255437Z" level=info msg="CreateContainer within sandbox \"0226602d76a598e4a7be88231db5482bdde905fcb484e1e1bbf72875d52884e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"652230cfbcaacd822ef8e1a0446b1a7a00fc7646ae1c9191797483a493a4f164\"" Mar 3 12:48:10.994395 containerd[2007]: time="2026-03-03T12:48:10.994222513Z" level=info msg="StartContainer for 
\"652230cfbcaacd822ef8e1a0446b1a7a00fc7646ae1c9191797483a493a4f164\"" Mar 3 12:48:10.998012 containerd[2007]: time="2026-03-03T12:48:10.997860481Z" level=info msg="connecting to shim 652230cfbcaacd822ef8e1a0446b1a7a00fc7646ae1c9191797483a493a4f164" address="unix:///run/containerd/s/0addbab4869a2b83e2d4f7ddbd2d2c93e2e42ad02e4ed2410b16ba7e521bbd08" protocol=ttrpc version=3 Mar 3 12:48:11.042296 systemd[1]: Started cri-containerd-652230cfbcaacd822ef8e1a0446b1a7a00fc7646ae1c9191797483a493a4f164.scope - libcontainer container 652230cfbcaacd822ef8e1a0446b1a7a00fc7646ae1c9191797483a493a4f164. Mar 3 12:48:11.164830 containerd[2007]: time="2026-03-03T12:48:11.164773593Z" level=info msg="StartContainer for \"652230cfbcaacd822ef8e1a0446b1a7a00fc7646ae1c9191797483a493a4f164\" returns successfully" Mar 3 12:48:12.725635 ntpd[2201]: Listen normally on 9 cali970bb67423d [fe80::ecee:eeff:feee:eeee%8]:123 Mar 3 12:48:12.725757 ntpd[2201]: Listen normally on 10 cali307f2b757b2 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 3 12:48:12.726612 ntpd[2201]: 3 Mar 12:48:12 ntpd[2201]: Listen normally on 9 cali970bb67423d [fe80::ecee:eeff:feee:eeee%8]:123 Mar 3 12:48:12.726612 ntpd[2201]: 3 Mar 12:48:12 ntpd[2201]: Listen normally on 10 cali307f2b757b2 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 3 12:48:12.726612 ntpd[2201]: 3 Mar 12:48:12 ntpd[2201]: Listen normally on 11 cali8d3c405f263 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 3 12:48:12.726612 ntpd[2201]: 3 Mar 12:48:12 ntpd[2201]: Listen normally on 12 cali13088f67c91 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 3 12:48:12.726612 ntpd[2201]: 3 Mar 12:48:12 ntpd[2201]: Listen normally on 13 cali74b83cabc4e [fe80::ecee:eeff:feee:eeee%12]:123 Mar 3 12:48:12.725805 ntpd[2201]: Listen normally on 11 cali8d3c405f263 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 3 12:48:12.726953 ntpd[2201]: 3 Mar 12:48:12 ntpd[2201]: Listen normally on 14 cali38fad329b8b [fe80::ecee:eeff:feee:eeee%13]:123 Mar 3 12:48:12.726953 ntpd[2201]: 3 Mar 12:48:12 ntpd[2201]: Listen normally on 15 
cali5490444b7e6 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 3 12:48:12.725850 ntpd[2201]: Listen normally on 12 cali13088f67c91 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 3 12:48:12.725895 ntpd[2201]: Listen normally on 13 cali74b83cabc4e [fe80::ecee:eeff:feee:eeee%12]:123 Mar 3 12:48:12.726736 ntpd[2201]: Listen normally on 14 cali38fad329b8b [fe80::ecee:eeff:feee:eeee%13]:123 Mar 3 12:48:12.726808 ntpd[2201]: Listen normally on 15 cali5490444b7e6 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 3 12:48:13.728120 kubelet[3342]: I0303 12:48:13.727012 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-579696bc9c-5btc9" podStartSLOduration=44.774076019 podStartE2EDuration="48.726989186s" podCreationTimestamp="2026-03-03 12:47:25 +0000 UTC" firstStartedPulling="2026-03-03 12:48:06.986660289 +0000 UTC m=+64.538365618" lastFinishedPulling="2026-03-03 12:48:10.93957336 +0000 UTC m=+68.491278785" observedRunningTime="2026-03-03 12:48:11.364893598 +0000 UTC m=+68.916598939" watchObservedRunningTime="2026-03-03 12:48:13.726989186 +0000 UTC m=+71.278694503" Mar 3 12:48:13.805416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180773851.mount: Deactivated successfully. 
Mar 3 12:48:14.632461 containerd[2007]: time="2026-03-03T12:48:14.632382771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:14.634532 containerd[2007]: time="2026-03-03T12:48:14.634225707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 3 12:48:14.636675 containerd[2007]: time="2026-03-03T12:48:14.636616407Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:14.642090 containerd[2007]: time="2026-03-03T12:48:14.642037083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:14.643599 containerd[2007]: time="2026-03-03T12:48:14.643398003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.703188019s" Mar 3 12:48:14.643599 containerd[2007]: time="2026-03-03T12:48:14.643453767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 3 12:48:14.646663 containerd[2007]: time="2026-03-03T12:48:14.646065291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 12:48:14.654933 containerd[2007]: time="2026-03-03T12:48:14.654876855Z" level=info msg="CreateContainer within sandbox \"3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 3 12:48:14.675791 containerd[2007]: time="2026-03-03T12:48:14.675435219Z" level=info msg="Container 689bb6c8f7720a2c225b88f2699f9b07962bf8800d396d683718e4639cd68953: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:48:14.693892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740247614.mount: Deactivated successfully. Mar 3 12:48:14.710478 containerd[2007]: time="2026-03-03T12:48:14.710420055Z" level=info msg="CreateContainer within sandbox \"3d7e6596f6176d7af1e62e206c608d148726782be1028b5adf00528c371ac013\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"689bb6c8f7720a2c225b88f2699f9b07962bf8800d396d683718e4639cd68953\"" Mar 3 12:48:14.712346 containerd[2007]: time="2026-03-03T12:48:14.712281591Z" level=info msg="StartContainer for \"689bb6c8f7720a2c225b88f2699f9b07962bf8800d396d683718e4639cd68953\"" Mar 3 12:48:14.715243 containerd[2007]: time="2026-03-03T12:48:14.715176183Z" level=info msg="connecting to shim 689bb6c8f7720a2c225b88f2699f9b07962bf8800d396d683718e4639cd68953" address="unix:///run/containerd/s/46f699735e402ca2feb4bb313a47eaac9309891269cb8ebebd1a0b9a31be4d0c" protocol=ttrpc version=3 Mar 3 12:48:14.764036 systemd[1]: Started cri-containerd-689bb6c8f7720a2c225b88f2699f9b07962bf8800d396d683718e4639cd68953.scope - libcontainer container 689bb6c8f7720a2c225b88f2699f9b07962bf8800d396d683718e4639cd68953. 
Mar 3 12:48:14.958111 containerd[2007]: time="2026-03-03T12:48:14.957944812Z" level=info msg="StartContainer for \"689bb6c8f7720a2c225b88f2699f9b07962bf8800d396d683718e4639cd68953\" returns successfully" Mar 3 12:48:15.072361 containerd[2007]: time="2026-03-03T12:48:15.072282505Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:15.074352 containerd[2007]: time="2026-03-03T12:48:15.074288425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 3 12:48:15.078708 containerd[2007]: time="2026-03-03T12:48:15.078627097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 432.50723ms" Mar 3 12:48:15.078708 containerd[2007]: time="2026-03-03T12:48:15.078688753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 3 12:48:15.081282 containerd[2007]: time="2026-03-03T12:48:15.081227341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 3 12:48:15.092924 containerd[2007]: time="2026-03-03T12:48:15.092831761Z" level=info msg="CreateContainer within sandbox \"019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 3 12:48:15.111685 containerd[2007]: time="2026-03-03T12:48:15.111293293Z" level=info msg="Container 6379ecd372e896197d4bd56c5f63e0b699bddb2b1697ad4a12d5d5bd527d6443: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:48:15.131007 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4176637250.mount: Deactivated successfully. Mar 3 12:48:15.144912 containerd[2007]: time="2026-03-03T12:48:15.144833173Z" level=info msg="CreateContainer within sandbox \"019575aaacc06da59e9dd37b5dc45607a537f13c3fde1c3f310f64fbacc12cda\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6379ecd372e896197d4bd56c5f63e0b699bddb2b1697ad4a12d5d5bd527d6443\"" Mar 3 12:48:15.146346 containerd[2007]: time="2026-03-03T12:48:15.146248309Z" level=info msg="StartContainer for \"6379ecd372e896197d4bd56c5f63e0b699bddb2b1697ad4a12d5d5bd527d6443\"" Mar 3 12:48:15.150168 containerd[2007]: time="2026-03-03T12:48:15.150024133Z" level=info msg="connecting to shim 6379ecd372e896197d4bd56c5f63e0b699bddb2b1697ad4a12d5d5bd527d6443" address="unix:///run/containerd/s/b3352cebef6f0534e955f180bb5a5178b26a2421913fbe52189071691d644ded" protocol=ttrpc version=3 Mar 3 12:48:15.205028 systemd[1]: Started cri-containerd-6379ecd372e896197d4bd56c5f63e0b699bddb2b1697ad4a12d5d5bd527d6443.scope - libcontainer container 6379ecd372e896197d4bd56c5f63e0b699bddb2b1697ad4a12d5d5bd527d6443. Mar 3 12:48:15.270062 systemd[1]: Started sshd@8-172.31.25.173:22-20.161.92.111:34890.service - OpenSSH per-connection server daemon (20.161.92.111:34890). 
Mar 3 12:48:15.320719 containerd[2007]: time="2026-03-03T12:48:15.320574158Z" level=info msg="StartContainer for \"6379ecd372e896197d4bd56c5f63e0b699bddb2b1697ad4a12d5d5bd527d6443\" returns successfully" Mar 3 12:48:15.443764 kubelet[3342]: I0303 12:48:15.440782 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-579696bc9c-ppbr5" podStartSLOduration=42.477606967 podStartE2EDuration="50.440759799s" podCreationTimestamp="2026-03-03 12:47:25 +0000 UTC" firstStartedPulling="2026-03-03 12:48:07.117645161 +0000 UTC m=+64.669350502" lastFinishedPulling="2026-03-03 12:48:15.080797933 +0000 UTC m=+72.632503334" observedRunningTime="2026-03-03 12:48:15.402140642 +0000 UTC m=+72.953845983" watchObservedRunningTime="2026-03-03 12:48:15.440759799 +0000 UTC m=+72.992465152" Mar 3 12:48:15.676688 kubelet[3342]: I0303 12:48:15.676578 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-wjtnn" podStartSLOduration=43.12006375 podStartE2EDuration="50.676561504s" podCreationTimestamp="2026-03-03 12:47:25 +0000 UTC" firstStartedPulling="2026-03-03 12:48:07.089186465 +0000 UTC m=+64.640891794" lastFinishedPulling="2026-03-03 12:48:14.645684231 +0000 UTC m=+72.197389548" observedRunningTime="2026-03-03 12:48:15.440384391 +0000 UTC m=+72.992089732" watchObservedRunningTime="2026-03-03 12:48:15.676561504 +0000 UTC m=+73.228266833" Mar 3 12:48:15.782143 sshd[5921]: Accepted publickey for core from 20.161.92.111 port 34890 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:15.785952 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:15.797167 systemd-logind[1978]: New session 9 of user core. Mar 3 12:48:15.802948 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 3 12:48:16.206746 sshd[5967]: Connection closed by 20.161.92.111 port 34890 Mar 3 12:48:16.207478 sshd-session[5921]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:16.217614 systemd[1]: sshd@8-172.31.25.173:22-20.161.92.111:34890.service: Deactivated successfully. Mar 3 12:48:16.224457 systemd[1]: session-9.scope: Deactivated successfully. Mar 3 12:48:16.227049 systemd-logind[1978]: Session 9 logged out. Waiting for processes to exit. Mar 3 12:48:16.232242 systemd-logind[1978]: Removed session 9. Mar 3 12:48:19.400146 containerd[2007]: time="2026-03-03T12:48:19.400067142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:19.401665 containerd[2007]: time="2026-03-03T12:48:19.401446638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 3 12:48:19.403588 containerd[2007]: time="2026-03-03T12:48:19.403261158Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:19.412504 containerd[2007]: time="2026-03-03T12:48:19.412441338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:19.417425 containerd[2007]: time="2026-03-03T12:48:19.417274470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.335985917s" Mar 3 12:48:19.418792 
containerd[2007]: time="2026-03-03T12:48:19.418735746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 3 12:48:19.424023 containerd[2007]: time="2026-03-03T12:48:19.423958566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 3 12:48:19.491611 containerd[2007]: time="2026-03-03T12:48:19.491538931Z" level=info msg="CreateContainer within sandbox \"aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 3 12:48:19.516514 containerd[2007]: time="2026-03-03T12:48:19.516441919Z" level=info msg="Container 2891fb291c6136e482ac3565a2175df0085e78eacb04e14697a4d81803923efc: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:48:19.528670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030236004.mount: Deactivated successfully. Mar 3 12:48:19.555624 containerd[2007]: time="2026-03-03T12:48:19.555570067Z" level=info msg="CreateContainer within sandbox \"aec73f05eb98c7e65f46a05cece84b234d65515bb7eaa1f9c964470c39b62e36\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2891fb291c6136e482ac3565a2175df0085e78eacb04e14697a4d81803923efc\"" Mar 3 12:48:19.558819 containerd[2007]: time="2026-03-03T12:48:19.558246163Z" level=info msg="StartContainer for \"2891fb291c6136e482ac3565a2175df0085e78eacb04e14697a4d81803923efc\"" Mar 3 12:48:19.563762 containerd[2007]: time="2026-03-03T12:48:19.563586547Z" level=info msg="connecting to shim 2891fb291c6136e482ac3565a2175df0085e78eacb04e14697a4d81803923efc" address="unix:///run/containerd/s/5554d44ff10aea33b79ced95e794dda954755cd2e070c8040f789baef281e4af" protocol=ttrpc version=3 Mar 3 12:48:19.614031 systemd[1]: Started cri-containerd-2891fb291c6136e482ac3565a2175df0085e78eacb04e14697a4d81803923efc.scope - libcontainer container 
2891fb291c6136e482ac3565a2175df0085e78eacb04e14697a4d81803923efc. Mar 3 12:48:19.707809 containerd[2007]: time="2026-03-03T12:48:19.707032688Z" level=info msg="StartContainer for \"2891fb291c6136e482ac3565a2175df0085e78eacb04e14697a4d81803923efc\" returns successfully" Mar 3 12:48:20.445954 kubelet[3342]: I0303 12:48:20.445573 3342 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-795844594-88vdk" podStartSLOduration=39.589078426 podStartE2EDuration="51.4455482s" podCreationTimestamp="2026-03-03 12:47:29 +0000 UTC" firstStartedPulling="2026-03-03 12:48:07.566464244 +0000 UTC m=+65.118169573" lastFinishedPulling="2026-03-03 12:48:19.422934006 +0000 UTC m=+76.974639347" observedRunningTime="2026-03-03 12:48:20.44261522 +0000 UTC m=+77.994320561" watchObservedRunningTime="2026-03-03 12:48:20.4455482 +0000 UTC m=+77.997253541" Mar 3 12:48:21.298745 systemd[1]: Started sshd@9-172.31.25.173:22-20.161.92.111:45452.service - OpenSSH per-connection server daemon (20.161.92.111:45452). Mar 3 12:48:21.774513 sshd[6070]: Accepted publickey for core from 20.161.92.111 port 45452 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:21.779941 sshd-session[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:21.792431 systemd-logind[1978]: New session 10 of user core. Mar 3 12:48:21.800138 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 3 12:48:21.818241 containerd[2007]: time="2026-03-03T12:48:21.818152510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:21.821121 containerd[2007]: time="2026-03-03T12:48:21.820867894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 3 12:48:21.835935 containerd[2007]: time="2026-03-03T12:48:21.835861258Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:21.840887 containerd[2007]: time="2026-03-03T12:48:21.840770194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:21.842744 containerd[2007]: time="2026-03-03T12:48:21.842242090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.418220008s" Mar 3 12:48:21.842744 containerd[2007]: time="2026-03-03T12:48:21.842298058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 3 12:48:21.868540 containerd[2007]: time="2026-03-03T12:48:21.868491131Z" level=info msg="CreateContainer within sandbox \"860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 3 12:48:21.888749 containerd[2007]: time="2026-03-03T12:48:21.888677879Z" level=info msg="Container 
919fcdc168783e527e1a4cacbd5bf6e6823d680e8c9b98c1e441890196ed1e2d: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:48:21.912745 containerd[2007]: time="2026-03-03T12:48:21.912634367Z" level=info msg="CreateContainer within sandbox \"860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"919fcdc168783e527e1a4cacbd5bf6e6823d680e8c9b98c1e441890196ed1e2d\"" Mar 3 12:48:21.913862 containerd[2007]: time="2026-03-03T12:48:21.913786079Z" level=info msg="StartContainer for \"919fcdc168783e527e1a4cacbd5bf6e6823d680e8c9b98c1e441890196ed1e2d\"" Mar 3 12:48:21.919012 containerd[2007]: time="2026-03-03T12:48:21.918851627Z" level=info msg="connecting to shim 919fcdc168783e527e1a4cacbd5bf6e6823d680e8c9b98c1e441890196ed1e2d" address="unix:///run/containerd/s/9922e5187fe8b2763ee39a7fd45f30bdc537796f151c15015ca99501f45d0d4d" protocol=ttrpc version=3 Mar 3 12:48:21.991900 systemd[1]: Started cri-containerd-919fcdc168783e527e1a4cacbd5bf6e6823d680e8c9b98c1e441890196ed1e2d.scope - libcontainer container 919fcdc168783e527e1a4cacbd5bf6e6823d680e8c9b98c1e441890196ed1e2d. Mar 3 12:48:22.150721 containerd[2007]: time="2026-03-03T12:48:22.150418100Z" level=info msg="StartContainer for \"919fcdc168783e527e1a4cacbd5bf6e6823d680e8c9b98c1e441890196ed1e2d\" returns successfully" Mar 3 12:48:22.155860 containerd[2007]: time="2026-03-03T12:48:22.155789096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 3 12:48:22.258963 sshd[6077]: Connection closed by 20.161.92.111 port 45452 Mar 3 12:48:22.260005 sshd-session[6070]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:22.270279 systemd-logind[1978]: Session 10 logged out. Waiting for processes to exit. Mar 3 12:48:22.272570 systemd[1]: sshd@9-172.31.25.173:22-20.161.92.111:45452.service: Deactivated successfully. Mar 3 12:48:22.284584 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 3 12:48:22.293764 systemd-logind[1978]: Removed session 10. Mar 3 12:48:24.354872 containerd[2007]: time="2026-03-03T12:48:24.353827415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:24.359948 containerd[2007]: time="2026-03-03T12:48:24.359759435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 3 12:48:24.361882 containerd[2007]: time="2026-03-03T12:48:24.361798895Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:24.368039 containerd[2007]: time="2026-03-03T12:48:24.367954043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 12:48:24.371126 containerd[2007]: time="2026-03-03T12:48:24.371079287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.215211603s" Mar 3 12:48:24.371410 containerd[2007]: time="2026-03-03T12:48:24.371274191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 3 12:48:24.380840 containerd[2007]: time="2026-03-03T12:48:24.380747243Z" level=info msg="CreateContainer within sandbox 
\"860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 3 12:48:24.405526 containerd[2007]: time="2026-03-03T12:48:24.405409115Z" level=info msg="Container 060f36370e7e7d655096eeda8ab1872b3a99cf4d5f92f1eff9f6eb7eae50f98f: CDI devices from CRI Config.CDIDevices: []" Mar 3 12:48:24.439155 containerd[2007]: time="2026-03-03T12:48:24.438981647Z" level=info msg="CreateContainer within sandbox \"860a22cc527877dd52f8a0ef3360a0726e28654d431cea6ed155c23fb52ae17d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"060f36370e7e7d655096eeda8ab1872b3a99cf4d5f92f1eff9f6eb7eae50f98f\"" Mar 3 12:48:24.440929 containerd[2007]: time="2026-03-03T12:48:24.440819795Z" level=info msg="StartContainer for \"060f36370e7e7d655096eeda8ab1872b3a99cf4d5f92f1eff9f6eb7eae50f98f\"" Mar 3 12:48:24.444596 containerd[2007]: time="2026-03-03T12:48:24.444296339Z" level=info msg="connecting to shim 060f36370e7e7d655096eeda8ab1872b3a99cf4d5f92f1eff9f6eb7eae50f98f" address="unix:///run/containerd/s/9922e5187fe8b2763ee39a7fd45f30bdc537796f151c15015ca99501f45d0d4d" protocol=ttrpc version=3 Mar 3 12:48:24.489037 systemd[1]: Started cri-containerd-060f36370e7e7d655096eeda8ab1872b3a99cf4d5f92f1eff9f6eb7eae50f98f.scope - libcontainer container 060f36370e7e7d655096eeda8ab1872b3a99cf4d5f92f1eff9f6eb7eae50f98f. 
Mar 3 12:48:24.613088 containerd[2007]: time="2026-03-03T12:48:24.612917808Z" level=info msg="StartContainer for \"060f36370e7e7d655096eeda8ab1872b3a99cf4d5f92f1eff9f6eb7eae50f98f\" returns successfully" Mar 3 12:48:24.930148 kubelet[3342]: I0303 12:48:24.929572 3342 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 3 12:48:24.930148 kubelet[3342]: I0303 12:48:24.929665 3342 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 3 12:48:27.366920 systemd[1]: Started sshd@10-172.31.25.173:22-20.161.92.111:45454.service - OpenSSH per-connection server daemon (20.161.92.111:45454). Mar 3 12:48:27.866647 sshd[6204]: Accepted publickey for core from 20.161.92.111 port 45454 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:27.869930 sshd-session[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:27.880360 systemd-logind[1978]: New session 11 of user core. Mar 3 12:48:27.888016 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 3 12:48:28.279106 sshd[6207]: Connection closed by 20.161.92.111 port 45454 Mar 3 12:48:28.280021 sshd-session[6204]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:28.287565 systemd[1]: sshd@10-172.31.25.173:22-20.161.92.111:45454.service: Deactivated successfully. Mar 3 12:48:28.291901 systemd[1]: session-11.scope: Deactivated successfully. Mar 3 12:48:28.294660 systemd-logind[1978]: Session 11 logged out. Waiting for processes to exit. Mar 3 12:48:28.298963 systemd-logind[1978]: Removed session 11. 
Mar 3 12:48:28.908588 update_engine[1979]: I20260303 12:48:28.908478 1979 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 3 12:48:28.908588 update_engine[1979]: I20260303 12:48:28.908560 1979 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 3 12:48:28.909251 update_engine[1979]: I20260303 12:48:28.909109 1979 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 3 12:48:28.911666 update_engine[1979]: I20260303 12:48:28.911489 1979 omaha_request_params.cc:62] Current group set to stable Mar 3 12:48:28.912467 update_engine[1979]: I20260303 12:48:28.911664 1979 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 3 12:48:28.912467 update_engine[1979]: I20260303 12:48:28.911685 1979 update_attempter.cc:643] Scheduling an action processor start. Mar 3 12:48:28.912467 update_engine[1979]: I20260303 12:48:28.912261 1979 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 3 12:48:28.912467 update_engine[1979]: I20260303 12:48:28.912336 1979 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 3 12:48:28.912467 update_engine[1979]: I20260303 12:48:28.912446 1979 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 3 12:48:28.912467 update_engine[1979]: I20260303 12:48:28.912465 1979 omaha_request_action.cc:272] Request: Mar 3 12:48:28.912467 update_engine[1979]: Mar 3 12:48:28.912467 update_engine[1979]: Mar 3 12:48:28.912467 update_engine[1979]: Mar 3 12:48:28.912467 update_engine[1979]: Mar 3 12:48:28.912467 update_engine[1979]: Mar 3 12:48:28.912467 update_engine[1979]: Mar 3 12:48:28.912467 update_engine[1979]: Mar 3 12:48:28.912467 update_engine[1979]: Mar 3 12:48:28.912467 update_engine[1979]: I20260303 12:48:28.912480 1979 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 12:48:28.921554 update_engine[1979]: I20260303 12:48:28.919732 1979 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 12:48:28.923103 update_engine[1979]: I20260303 12:48:28.922660 1979 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 3 12:48:28.925268 locksmithd[2028]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 3 12:48:28.952783 update_engine[1979]: E20260303 12:48:28.950909 1979 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 12:48:28.952950 update_engine[1979]: I20260303 12:48:28.952883 1979 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 3 12:48:33.366348 systemd[1]: Started sshd@11-172.31.25.173:22-20.161.92.111:42320.service - OpenSSH per-connection server daemon (20.161.92.111:42320). Mar 3 12:48:33.830744 sshd[6250]: Accepted publickey for core from 20.161.92.111 port 42320 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:33.834199 sshd-session[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:33.844548 systemd-logind[1978]: New session 12 of user core. Mar 3 12:48:33.849997 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 3 12:48:34.255967 sshd[6253]: Connection closed by 20.161.92.111 port 42320 Mar 3 12:48:34.256962 sshd-session[6250]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:34.265208 systemd[1]: sshd@11-172.31.25.173:22-20.161.92.111:42320.service: Deactivated successfully. Mar 3 12:48:34.270646 systemd[1]: session-12.scope: Deactivated successfully. Mar 3 12:48:34.273381 systemd-logind[1978]: Session 12 logged out. Waiting for processes to exit. Mar 3 12:48:34.276269 systemd-logind[1978]: Removed session 12. Mar 3 12:48:34.348130 systemd[1]: Started sshd@12-172.31.25.173:22-20.161.92.111:42324.service - OpenSSH per-connection server daemon (20.161.92.111:42324). 
Mar 3 12:48:34.809432 sshd[6266]: Accepted publickey for core from 20.161.92.111 port 42324 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:34.812455 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:34.823304 systemd-logind[1978]: New session 13 of user core. Mar 3 12:48:34.831968 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 3 12:48:35.336547 sshd[6269]: Connection closed by 20.161.92.111 port 42324 Mar 3 12:48:35.336430 sshd-session[6266]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:35.343585 systemd[1]: sshd@12-172.31.25.173:22-20.161.92.111:42324.service: Deactivated successfully. Mar 3 12:48:35.348684 systemd[1]: session-13.scope: Deactivated successfully. Mar 3 12:48:35.351265 systemd-logind[1978]: Session 13 logged out. Waiting for processes to exit. Mar 3 12:48:35.355680 systemd-logind[1978]: Removed session 13. Mar 3 12:48:35.427085 systemd[1]: Started sshd@13-172.31.25.173:22-20.161.92.111:42326.service - OpenSSH per-connection server daemon (20.161.92.111:42326). Mar 3 12:48:35.902758 sshd[6301]: Accepted publickey for core from 20.161.92.111 port 42326 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:35.904925 sshd-session[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:35.918018 systemd-logind[1978]: New session 14 of user core. Mar 3 12:48:35.926015 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 3 12:48:36.314516 sshd[6304]: Connection closed by 20.161.92.111 port 42326 Mar 3 12:48:36.315084 sshd-session[6301]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:36.323992 systemd-logind[1978]: Session 14 logged out. Waiting for processes to exit. Mar 3 12:48:36.325783 systemd[1]: sshd@13-172.31.25.173:22-20.161.92.111:42326.service: Deactivated successfully. 
Mar 3 12:48:36.330414 systemd[1]: session-14.scope: Deactivated successfully. Mar 3 12:48:36.333624 systemd-logind[1978]: Removed session 14. Mar 3 12:48:38.908514 update_engine[1979]: I20260303 12:48:38.907759 1979 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 12:48:38.908514 update_engine[1979]: I20260303 12:48:38.907873 1979 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 12:48:38.908514 update_engine[1979]: I20260303 12:48:38.908398 1979 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 3 12:48:38.910496 update_engine[1979]: E20260303 12:48:38.910342 1979 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 12:48:38.910496 update_engine[1979]: I20260303 12:48:38.910455 1979 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 3 12:48:41.411124 systemd[1]: Started sshd@14-172.31.25.173:22-20.161.92.111:40984.service - OpenSSH per-connection server daemon (20.161.92.111:40984). Mar 3 12:48:41.872469 sshd[6331]: Accepted publickey for core from 20.161.92.111 port 40984 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:41.875380 sshd-session[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:41.884670 systemd-logind[1978]: New session 15 of user core. Mar 3 12:48:41.891997 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 3 12:48:42.247882 sshd[6334]: Connection closed by 20.161.92.111 port 40984 Mar 3 12:48:42.246959 sshd-session[6331]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:42.253149 systemd[1]: sshd@14-172.31.25.173:22-20.161.92.111:40984.service: Deactivated successfully. Mar 3 12:48:42.257578 systemd[1]: session-15.scope: Deactivated successfully. Mar 3 12:48:42.263869 systemd-logind[1978]: Session 15 logged out. Waiting for processes to exit. Mar 3 12:48:42.265974 systemd-logind[1978]: Removed session 15. 
Mar 3 12:48:42.351266 systemd[1]: Started sshd@15-172.31.25.173:22-20.161.92.111:40986.service - OpenSSH per-connection server daemon (20.161.92.111:40986). Mar 3 12:48:42.840208 sshd[6345]: Accepted publickey for core from 20.161.92.111 port 40986 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:42.842316 sshd-session[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:42.851063 systemd-logind[1978]: New session 16 of user core. Mar 3 12:48:42.861982 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 3 12:48:43.559928 sshd[6348]: Connection closed by 20.161.92.111 port 40986 Mar 3 12:48:43.561479 sshd-session[6345]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:43.568868 systemd[1]: sshd@15-172.31.25.173:22-20.161.92.111:40986.service: Deactivated successfully. Mar 3 12:48:43.573958 systemd[1]: session-16.scope: Deactivated successfully. Mar 3 12:48:43.576826 systemd-logind[1978]: Session 16 logged out. Waiting for processes to exit. Mar 3 12:48:43.580182 systemd-logind[1978]: Removed session 16. Mar 3 12:48:43.649062 systemd[1]: Started sshd@16-172.31.25.173:22-20.161.92.111:40994.service - OpenSSH per-connection server daemon (20.161.92.111:40994). Mar 3 12:48:44.134859 sshd[6358]: Accepted publickey for core from 20.161.92.111 port 40994 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:44.137202 sshd-session[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:44.147252 systemd-logind[1978]: New session 17 of user core. Mar 3 12:48:44.151960 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 3 12:48:45.316922 sshd[6361]: Connection closed by 20.161.92.111 port 40994 Mar 3 12:48:45.318040 sshd-session[6358]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:45.331244 systemd[1]: sshd@16-172.31.25.173:22-20.161.92.111:40994.service: Deactivated successfully. 
Mar 3 12:48:45.335943 systemd[1]: session-17.scope: Deactivated successfully. Mar 3 12:48:45.339868 systemd-logind[1978]: Session 17 logged out. Waiting for processes to exit. Mar 3 12:48:45.344072 systemd-logind[1978]: Removed session 17. Mar 3 12:48:45.412940 systemd[1]: Started sshd@17-172.31.25.173:22-20.161.92.111:41008.service - OpenSSH per-connection server daemon (20.161.92.111:41008). Mar 3 12:48:45.884386 sshd[6398]: Accepted publickey for core from 20.161.92.111 port 41008 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:45.886925 sshd-session[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:45.894593 systemd-logind[1978]: New session 18 of user core. Mar 3 12:48:45.908986 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 3 12:48:46.556339 sshd[6408]: Connection closed by 20.161.92.111 port 41008 Mar 3 12:48:46.556905 sshd-session[6398]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:46.567198 systemd-logind[1978]: Session 18 logged out. Waiting for processes to exit. Mar 3 12:48:46.567377 systemd[1]: sshd@17-172.31.25.173:22-20.161.92.111:41008.service: Deactivated successfully. Mar 3 12:48:46.573283 systemd[1]: session-18.scope: Deactivated successfully. Mar 3 12:48:46.576587 systemd-logind[1978]: Removed session 18. Mar 3 12:48:46.650602 systemd[1]: Started sshd@18-172.31.25.173:22-20.161.92.111:41014.service - OpenSSH per-connection server daemon (20.161.92.111:41014). Mar 3 12:48:47.116104 sshd[6420]: Accepted publickey for core from 20.161.92.111 port 41014 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:47.119508 sshd-session[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:47.128598 systemd-logind[1978]: New session 19 of user core. Mar 3 12:48:47.136966 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 3 12:48:47.483819 sshd[6423]: Connection closed by 20.161.92.111 port 41014 Mar 3 12:48:47.483291 sshd-session[6420]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:47.489402 systemd[1]: sshd@18-172.31.25.173:22-20.161.92.111:41014.service: Deactivated successfully. Mar 3 12:48:47.494822 systemd[1]: session-19.scope: Deactivated successfully. Mar 3 12:48:47.498056 systemd-logind[1978]: Session 19 logged out. Waiting for processes to exit. Mar 3 12:48:47.501751 systemd-logind[1978]: Removed session 19. Mar 3 12:48:48.908392 update_engine[1979]: I20260303 12:48:48.908300 1979 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 12:48:48.908975 update_engine[1979]: I20260303 12:48:48.908416 1979 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 12:48:48.909073 update_engine[1979]: I20260303 12:48:48.909019 1979 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 3 12:48:48.909873 update_engine[1979]: E20260303 12:48:48.909816 1979 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 12:48:48.909959 update_engine[1979]: I20260303 12:48:48.909930 1979 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 3 12:48:52.577119 systemd[1]: Started sshd@19-172.31.25.173:22-20.161.92.111:54964.service - OpenSSH per-connection server daemon (20.161.92.111:54964). Mar 3 12:48:53.035612 sshd[6464]: Accepted publickey for core from 20.161.92.111 port 54964 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:53.038573 sshd-session[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:53.046992 systemd-logind[1978]: New session 20 of user core. Mar 3 12:48:53.058950 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 3 12:48:53.420191 sshd[6467]: Connection closed by 20.161.92.111 port 54964 Mar 3 12:48:53.421356 sshd-session[6464]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:53.432247 systemd-logind[1978]: Session 20 logged out. Waiting for processes to exit. Mar 3 12:48:53.434073 systemd[1]: sshd@19-172.31.25.173:22-20.161.92.111:54964.service: Deactivated successfully. Mar 3 12:48:53.441537 systemd[1]: session-20.scope: Deactivated successfully. Mar 3 12:48:53.449159 systemd-logind[1978]: Removed session 20. Mar 3 12:48:58.518146 systemd[1]: Started sshd@20-172.31.25.173:22-20.161.92.111:54980.service - OpenSSH per-connection server daemon (20.161.92.111:54980). Mar 3 12:48:58.912519 update_engine[1979]: I20260303 12:48:58.911825 1979 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 12:48:58.912519 update_engine[1979]: I20260303 12:48:58.911926 1979 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 12:48:58.912519 update_engine[1979]: I20260303 12:48:58.912412 1979 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 3 12:48:58.913625 update_engine[1979]: E20260303 12:48:58.913506 1979 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.913863 1979 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.913892 1979 omaha_request_action.cc:617] Omaha request response: Mar 3 12:48:58.914777 update_engine[1979]: E20260303 12:48:58.914005 1979 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914041 1979 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914057 1979 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914069 1979 update_attempter.cc:306] Processing Done. Mar 3 12:48:58.914777 update_engine[1979]: E20260303 12:48:58.914095 1979 update_attempter.cc:619] Update failed. Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914112 1979 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914126 1979 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914140 1979 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914247 1979 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914290 1979 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 3 12:48:58.914777 update_engine[1979]: I20260303 12:48:58.914306 1979 omaha_request_action.cc:272] Request: Mar 3 12:48:58.914777 update_engine[1979]: Mar 3 12:48:58.914777 update_engine[1979]: Mar 3 12:48:58.914777 update_engine[1979]: Mar 3 12:48:58.914777 update_engine[1979]: Mar 3 12:48:58.914777 update_engine[1979]: Mar 3 12:48:58.914777 update_engine[1979]: Mar 3 12:48:58.915615 update_engine[1979]: I20260303 12:48:58.914321 1979 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 12:48:58.915615 update_engine[1979]: I20260303 12:48:58.914364 1979 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 12:48:58.916418 locksmithd[2028]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 3 12:48:58.917122 
update_engine[1979]: I20260303 12:48:58.916249 1979 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 3 12:48:58.917639 update_engine[1979]: E20260303 12:48:58.917244 1979 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 12:48:58.917639 update_engine[1979]: I20260303 12:48:58.917383 1979 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 3 12:48:58.917639 update_engine[1979]: I20260303 12:48:58.917403 1979 omaha_request_action.cc:617] Omaha request response: Mar 3 12:48:58.917639 update_engine[1979]: I20260303 12:48:58.917421 1979 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 3 12:48:58.917639 update_engine[1979]: I20260303 12:48:58.917435 1979 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 3 12:48:58.917639 update_engine[1979]: I20260303 12:48:58.917448 1979 update_attempter.cc:306] Processing Done. Mar 3 12:48:58.917639 update_engine[1979]: I20260303 12:48:58.917462 1979 update_attempter.cc:310] Error event sent. Mar 3 12:48:58.917639 update_engine[1979]: I20260303 12:48:58.917483 1979 update_check_scheduler.cc:74] Next update check in 45m40s Mar 3 12:48:58.918261 locksmithd[2028]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 3 12:48:58.976765 sshd[6504]: Accepted publickey for core from 20.161.92.111 port 54980 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:48:58.979202 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:48:58.987363 systemd-logind[1978]: New session 21 of user core. Mar 3 12:48:58.996938 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 3 12:48:59.341766 sshd[6507]: Connection closed by 20.161.92.111 port 54980 Mar 3 12:48:59.342242 sshd-session[6504]: pam_unix(sshd:session): session closed for user core Mar 3 12:48:59.351876 systemd-logind[1978]: Session 21 logged out. Waiting for processes to exit. Mar 3 12:48:59.353374 systemd[1]: sshd@20-172.31.25.173:22-20.161.92.111:54980.service: Deactivated successfully. Mar 3 12:48:59.357546 systemd[1]: session-21.scope: Deactivated successfully. Mar 3 12:48:59.362226 systemd-logind[1978]: Removed session 21. Mar 3 12:49:04.452474 systemd[1]: Started sshd@21-172.31.25.173:22-20.161.92.111:33220.service - OpenSSH per-connection server daemon (20.161.92.111:33220). Mar 3 12:49:04.940754 sshd[6521]: Accepted publickey for core from 20.161.92.111 port 33220 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:49:04.942837 sshd-session[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:49:04.953602 systemd-logind[1978]: New session 22 of user core. Mar 3 12:49:04.962197 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 3 12:49:05.327330 sshd[6524]: Connection closed by 20.161.92.111 port 33220 Mar 3 12:49:05.328362 sshd-session[6521]: pam_unix(sshd:session): session closed for user core Mar 3 12:49:05.336947 systemd[1]: sshd@21-172.31.25.173:22-20.161.92.111:33220.service: Deactivated successfully. Mar 3 12:49:05.341540 systemd[1]: session-22.scope: Deactivated successfully. Mar 3 12:49:05.344192 systemd-logind[1978]: Session 22 logged out. Waiting for processes to exit. Mar 3 12:49:05.348914 systemd-logind[1978]: Removed session 22. Mar 3 12:49:10.414856 systemd[1]: Started sshd@22-172.31.25.173:22-20.161.92.111:41368.service - OpenSSH per-connection server daemon (20.161.92.111:41368). 
Mar 3 12:49:10.895256 sshd[6540]: Accepted publickey for core from 20.161.92.111 port 41368 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:49:10.898426 sshd-session[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:49:10.909020 systemd-logind[1978]: New session 23 of user core. Mar 3 12:49:10.918029 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 3 12:49:11.328181 sshd[6543]: Connection closed by 20.161.92.111 port 41368 Mar 3 12:49:11.330067 sshd-session[6540]: pam_unix(sshd:session): session closed for user core Mar 3 12:49:11.345348 systemd[1]: sshd@22-172.31.25.173:22-20.161.92.111:41368.service: Deactivated successfully. Mar 3 12:49:11.352891 systemd[1]: session-23.scope: Deactivated successfully. Mar 3 12:49:11.356603 systemd-logind[1978]: Session 23 logged out. Waiting for processes to exit. Mar 3 12:49:11.362250 systemd-logind[1978]: Removed session 23. Mar 3 12:49:16.437977 systemd[1]: Started sshd@23-172.31.25.173:22-20.161.92.111:41374.service - OpenSSH per-connection server daemon (20.161.92.111:41374). Mar 3 12:49:16.954797 sshd[6600]: Accepted publickey for core from 20.161.92.111 port 41374 ssh2: RSA SHA256:22ZbIgyaNQczCuvFy6/wgQexuKUTzmKTMN4AWwPPQfw Mar 3 12:49:16.957241 sshd-session[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 12:49:16.969430 systemd-logind[1978]: New session 24 of user core. Mar 3 12:49:16.979961 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 3 12:49:17.336609 sshd[6613]: Connection closed by 20.161.92.111 port 41374 Mar 3 12:49:17.337544 sshd-session[6600]: pam_unix(sshd:session): session closed for user core Mar 3 12:49:17.346184 systemd[1]: sshd@23-172.31.25.173:22-20.161.92.111:41374.service: Deactivated successfully. Mar 3 12:49:17.351654 systemd[1]: session-24.scope: Deactivated successfully. Mar 3 12:49:17.355889 systemd-logind[1978]: Session 24 logged out. 
Waiting for processes to exit. Mar 3 12:49:17.357843 systemd-logind[1978]: Removed session 24. Mar 3 12:49:31.934073 systemd[1]: cri-containerd-eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af.scope: Deactivated successfully. Mar 3 12:49:31.936127 systemd[1]: cri-containerd-eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af.scope: Consumed 5.139s CPU time, 66M memory peak, 64K read from disk. Mar 3 12:49:31.945151 containerd[2007]: time="2026-03-03T12:49:31.944641327Z" level=info msg="received container exit event container_id:\"eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af\" id:\"eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af\" pid:3184 exit_status:1 exited_at:{seconds:1772542171 nanos:943871671}" Mar 3 12:49:32.002568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af-rootfs.mount: Deactivated successfully. Mar 3 12:49:32.320075 systemd[1]: cri-containerd-a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04.scope: Deactivated successfully. Mar 3 12:49:32.322959 systemd[1]: cri-containerd-a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04.scope: Consumed 21.785s CPU time, 109.9M memory peak. Mar 3 12:49:32.330309 containerd[2007]: time="2026-03-03T12:49:32.330144725Z" level=info msg="received container exit event container_id:\"a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04\" id:\"a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04\" pid:3937 exit_status:1 exited_at:{seconds:1772542172 nanos:329615909}" Mar 3 12:49:32.374498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04-rootfs.mount: Deactivated successfully. 
Mar 3 12:49:32.764739 kubelet[3342]: I0303 12:49:32.764315 3342 scope.go:122] "RemoveContainer" containerID="eb50e97bf9b4f17ad87436999f7686a0a94b154dfaee124fd94b5d995f7173af"
Mar 3 12:49:32.764739 kubelet[3342]: I0303 12:49:32.764613 3342 scope.go:122] "RemoveContainer" containerID="a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04"
Mar 3 12:49:32.772426 containerd[2007]: time="2026-03-03T12:49:32.772353343Z" level=info msg="CreateContainer within sandbox \"65fce30a070ccbcc931e22e72309187a37a0a61944c6043a228fb009d914ccd6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 3 12:49:32.775762 containerd[2007]: time="2026-03-03T12:49:32.774404911Z" level=info msg="CreateContainer within sandbox \"fd536d6ba4ed69b285dc7075cf022068fe1dc6e25c3d3bc3f55caa955b71d64c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 3 12:49:32.813772 containerd[2007]: time="2026-03-03T12:49:32.812412727Z" level=info msg="Container 7aecf0f599d40d678c96b06df8d040d6a6cd55d93ea4f52d9cf83fd5273c35a1: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:32.828948 containerd[2007]: time="2026-03-03T12:49:32.828877891Z" level=info msg="Container 8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:32.843877 containerd[2007]: time="2026-03-03T12:49:32.843824515Z" level=info msg="CreateContainer within sandbox \"65fce30a070ccbcc931e22e72309187a37a0a61944c6043a228fb009d914ccd6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7aecf0f599d40d678c96b06df8d040d6a6cd55d93ea4f52d9cf83fd5273c35a1\""
Mar 3 12:49:32.846739 containerd[2007]: time="2026-03-03T12:49:32.844816951Z" level=info msg="StartContainer for \"7aecf0f599d40d678c96b06df8d040d6a6cd55d93ea4f52d9cf83fd5273c35a1\""
Mar 3 12:49:32.847059 containerd[2007]: time="2026-03-03T12:49:32.847019299Z" level=info msg="connecting to shim 7aecf0f599d40d678c96b06df8d040d6a6cd55d93ea4f52d9cf83fd5273c35a1" address="unix:///run/containerd/s/9f8fdde935027bb7ab0d5ecba3cc16b2a36b0298bb82517ef7b6a0e8a56822e7" protocol=ttrpc version=3
Mar 3 12:49:32.852578 containerd[2007]: time="2026-03-03T12:49:32.852507691Z" level=info msg="CreateContainer within sandbox \"fd536d6ba4ed69b285dc7075cf022068fe1dc6e25c3d3bc3f55caa955b71d64c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee\""
Mar 3 12:49:32.853529 containerd[2007]: time="2026-03-03T12:49:32.853458091Z" level=info msg="StartContainer for \"8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee\""
Mar 3 12:49:32.855798 containerd[2007]: time="2026-03-03T12:49:32.855729667Z" level=info msg="connecting to shim 8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee" address="unix:///run/containerd/s/3ba045487275b4292dfcd2492386548bf48bd68e58f36a5ce8967459a86f86ce" protocol=ttrpc version=3
Mar 3 12:49:32.900989 systemd[1]: Started cri-containerd-7aecf0f599d40d678c96b06df8d040d6a6cd55d93ea4f52d9cf83fd5273c35a1.scope - libcontainer container 7aecf0f599d40d678c96b06df8d040d6a6cd55d93ea4f52d9cf83fd5273c35a1.
Mar 3 12:49:32.905428 systemd[1]: Started cri-containerd-8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee.scope - libcontainer container 8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee.
Mar 3 12:49:33.019829 containerd[2007]: time="2026-03-03T12:49:33.018888460Z" level=info msg="StartContainer for \"8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee\" returns successfully"
Mar 3 12:49:33.075539 containerd[2007]: time="2026-03-03T12:49:33.075356596Z" level=info msg="StartContainer for \"7aecf0f599d40d678c96b06df8d040d6a6cd55d93ea4f52d9cf83fd5273c35a1\" returns successfully"
Mar 3 12:49:35.612812 kubelet[3342]: E0303 12:49:35.609554 3342 controller.go:251] "Failed to update lease" err="Put \"https://172.31.25.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-173?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 3 12:49:36.353123 systemd[1]: cri-containerd-a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56.scope: Deactivated successfully.
Mar 3 12:49:36.353654 systemd[1]: cri-containerd-a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56.scope: Consumed 3.035s CPU time, 22.6M memory peak.
Mar 3 12:49:36.360640 containerd[2007]: time="2026-03-03T12:49:36.360463929Z" level=info msg="received container exit event container_id:\"a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56\" id:\"a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56\" pid:3177 exit_status:1 exited_at:{seconds:1772542176 nanos:360130533}"
Mar 3 12:49:36.407464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56-rootfs.mount: Deactivated successfully.
Mar 3 12:49:36.790251 kubelet[3342]: I0303 12:49:36.789883 3342 scope.go:122] "RemoveContainer" containerID="a20359060d327f54bb181ee55565337ca2f87633ebc09f62be30b85b3e3a4f56"
Mar 3 12:49:36.794847 containerd[2007]: time="2026-03-03T12:49:36.794720531Z" level=info msg="CreateContainer within sandbox \"11f6459db7022fdb1e626f51cb14946e87444d7de4861031a2784e1c6d47ed69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 3 12:49:36.814721 containerd[2007]: time="2026-03-03T12:49:36.814086527Z" level=info msg="Container 8bd89ddf28c0ae21d2127fc664cd3b8982a707f248e9ed025cd45791e2967ad6: CDI devices from CRI Config.CDIDevices: []"
Mar 3 12:49:36.824980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1205424331.mount: Deactivated successfully.
Mar 3 12:49:36.836031 containerd[2007]: time="2026-03-03T12:49:36.835981847Z" level=info msg="CreateContainer within sandbox \"11f6459db7022fdb1e626f51cb14946e87444d7de4861031a2784e1c6d47ed69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8bd89ddf28c0ae21d2127fc664cd3b8982a707f248e9ed025cd45791e2967ad6\""
Mar 3 12:49:36.837188 containerd[2007]: time="2026-03-03T12:49:36.837119543Z" level=info msg="StartContainer for \"8bd89ddf28c0ae21d2127fc664cd3b8982a707f248e9ed025cd45791e2967ad6\""
Mar 3 12:49:36.839486 containerd[2007]: time="2026-03-03T12:49:36.839416523Z" level=info msg="connecting to shim 8bd89ddf28c0ae21d2127fc664cd3b8982a707f248e9ed025cd45791e2967ad6" address="unix:///run/containerd/s/e5d7f8fa18d511b1a24c75975413fd30b7c1201da6f709ce7fc1da99ef448efb" protocol=ttrpc version=3
Mar 3 12:49:36.878009 systemd[1]: Started cri-containerd-8bd89ddf28c0ae21d2127fc664cd3b8982a707f248e9ed025cd45791e2967ad6.scope - libcontainer container 8bd89ddf28c0ae21d2127fc664cd3b8982a707f248e9ed025cd45791e2967ad6.
Mar 3 12:49:36.960753 containerd[2007]: time="2026-03-03T12:49:36.960669468Z" level=info msg="StartContainer for \"8bd89ddf28c0ae21d2127fc664cd3b8982a707f248e9ed025cd45791e2967ad6\" returns successfully"
Mar 3 12:49:44.581226 systemd[1]: cri-containerd-8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee.scope: Deactivated successfully.
Mar 3 12:49:44.585996 containerd[2007]: time="2026-03-03T12:49:44.585923369Z" level=info msg="received container exit event container_id:\"8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee\" id:\"8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee\" pid:6748 exit_status:1 exited_at:{seconds:1772542184 nanos:585132653}"
Mar 3 12:49:44.625626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee-rootfs.mount: Deactivated successfully.
Mar 3 12:49:44.839212 kubelet[3342]: I0303 12:49:44.838610 3342 scope.go:122] "RemoveContainer" containerID="a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04"
Mar 3 12:49:44.839978 kubelet[3342]: I0303 12:49:44.839316 3342 scope.go:122] "RemoveContainer" containerID="8c96f376ff1f9f00a72eb734db3757168c5a690a06759ca4dcfa49a36a959bee"
Mar 3 12:49:44.839978 kubelet[3342]: E0303 12:49:44.839533 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-8ng9r_tigera-operator(312e25c8-3ead-41ca-bf81-ddf2c973c8ba)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-8ng9r" podUID="312e25c8-3ead-41ca-bf81-ddf2c973c8ba"
Mar 3 12:49:44.843586 containerd[2007]: time="2026-03-03T12:49:44.843501187Z" level=info msg="RemoveContainer for \"a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04\""
Mar 3 12:49:44.852922 containerd[2007]: time="2026-03-03T12:49:44.852838603Z" level=info msg="RemoveContainer for \"a981da9b4dde37fb5f43e873445439375bf52028c47047da944806104b536f04\" returns successfully"
Mar 3 12:49:45.614006 kubelet[3342]: E0303 12:49:45.613926 3342 controller.go:251] "Failed to update lease" err="Put \"https://172.31.25.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-173?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"