Mar 12 23:45:49.142483 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 12 23:45:49.142525 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Mar 12 22:07:21 -00 2026
Mar 12 23:45:49.142549 kernel: KASLR disabled due to lack of seed
Mar 12 23:45:49.142565 kernel: efi: EFI v2.7 by EDK II
Mar 12 23:45:49.142581 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Mar 12 23:45:49.142595 kernel: secureboot: Secure boot disabled
Mar 12 23:45:49.142612 kernel: ACPI: Early table checksum verification disabled
Mar 12 23:45:49.142627 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 12 23:45:49.142642 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 12 23:45:49.142657 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 12 23:45:49.142673 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 12 23:45:49.142691 kernel: ACPI: FACS 0x0000000078630000 000040
Mar 12 23:45:49.142707 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 12 23:45:49.142754 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 12 23:45:49.142773 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 12 23:45:49.142790 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 12 23:45:49.142812 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 12 23:45:49.142828 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 12 23:45:49.142844 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 12 23:45:49.142860 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 12 23:45:49.142876 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 12 23:45:49.142892 kernel: printk: legacy bootconsole [uart0] enabled
Mar 12 23:45:49.142907 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 12 23:45:49.142924 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 12 23:45:49.142940 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Mar 12 23:45:49.142956 kernel: Zone ranges:
Mar 12 23:45:49.142972 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 12 23:45:49.142992 kernel: DMA32 empty
Mar 12 23:45:49.143007 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 12 23:45:49.143023 kernel: Device empty
Mar 12 23:45:49.143038 kernel: Movable zone start for each node
Mar 12 23:45:49.143054 kernel: Early memory node ranges
Mar 12 23:45:49.143070 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 12 23:45:49.143085 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 12 23:45:49.143101 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 12 23:45:49.143116 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 12 23:45:49.143132 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 12 23:45:49.143148 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 12 23:45:49.143164 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 12 23:45:49.143184 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 12 23:45:49.143206 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 12 23:45:49.143223 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 12 23:45:49.143240 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Mar 12 23:45:49.143256 kernel: psci: probing for conduit method from ACPI.
Mar 12 23:45:49.143277 kernel: psci: PSCIv1.0 detected in firmware.
Mar 12 23:45:49.143293 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 12 23:45:49.143310 kernel: psci: Trusted OS migration not required
Mar 12 23:45:49.143326 kernel: psci: SMC Calling Convention v1.1
Mar 12 23:45:49.143343 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 12 23:45:49.143360 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 12 23:45:49.143376 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 12 23:45:49.143393 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 12 23:45:49.143410 kernel: Detected PIPT I-cache on CPU0
Mar 12 23:45:49.143426 kernel: CPU features: detected: GIC system register CPU interface
Mar 12 23:45:49.143443 kernel: CPU features: detected: Spectre-v2
Mar 12 23:45:49.143463 kernel: CPU features: detected: Spectre-v3a
Mar 12 23:45:49.143480 kernel: CPU features: detected: Spectre-BHB
Mar 12 23:45:49.143496 kernel: CPU features: detected: ARM erratum 1742098
Mar 12 23:45:49.143513 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 12 23:45:49.143529 kernel: alternatives: applying boot alternatives
Mar 12 23:45:49.143548 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9bf054737b516803a47d5bd373cc1c618bc257c93cef3d2e2bc09897e693383d
Mar 12 23:45:49.143565 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 23:45:49.143582 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 23:45:49.143599 kernel: Fallback order for Node 0: 0
Mar 12 23:45:49.143615 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Mar 12 23:45:49.143632 kernel: Policy zone: Normal
Mar 12 23:45:49.143652 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 23:45:49.143668 kernel: software IO TLB: area num 2.
Mar 12 23:45:49.143685 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Mar 12 23:45:49.143702 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 12 23:45:49.145758 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 23:45:49.145781 kernel: rcu: RCU event tracing is enabled.
Mar 12 23:45:49.145798 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 12 23:45:49.145816 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 23:45:49.145833 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 23:45:49.145849 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 23:45:49.145866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 12 23:45:49.145889 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 23:45:49.145906 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 23:45:49.145940 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 12 23:45:49.145960 kernel: GICv3: 96 SPIs implemented
Mar 12 23:45:49.145976 kernel: GICv3: 0 Extended SPIs implemented
Mar 12 23:45:49.145993 kernel: Root IRQ handler: gic_handle_irq
Mar 12 23:45:49.146009 kernel: GICv3: GICv3 features: 16 PPIs
Mar 12 23:45:49.146026 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Mar 12 23:45:49.146042 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 12 23:45:49.146059 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 12 23:45:49.146075 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Mar 12 23:45:49.146092 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Mar 12 23:45:49.146115 kernel: GICv3: using LPI property table @0x0000000400110000
Mar 12 23:45:49.146131 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 12 23:45:49.146148 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Mar 12 23:45:49.146165 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 23:45:49.146181 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 12 23:45:49.146198 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 12 23:45:49.146214 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 12 23:45:49.146231 kernel: Console: colour dummy device 80x25
Mar 12 23:45:49.146248 kernel: printk: legacy console [tty1] enabled
Mar 12 23:45:49.146266 kernel: ACPI: Core revision 20240827
Mar 12 23:45:49.146283 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 12 23:45:49.146305 kernel: pid_max: default: 32768 minimum: 301
Mar 12 23:45:49.146322 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 12 23:45:49.146339 kernel: landlock: Up and running.
Mar 12 23:45:49.146356 kernel: SELinux: Initializing.
Mar 12 23:45:49.146373 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 23:45:49.146390 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 23:45:49.146407 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 23:45:49.146424 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 23:45:49.146445 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 12 23:45:49.146462 kernel: Remapping and enabling EFI services.
Mar 12 23:45:49.146478 kernel: smp: Bringing up secondary CPUs ...
Mar 12 23:45:49.146495 kernel: Detected PIPT I-cache on CPU1
Mar 12 23:45:49.146512 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 12 23:45:49.146529 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Mar 12 23:45:49.146546 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 12 23:45:49.146563 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 23:45:49.146580 kernel: SMP: Total of 2 processors activated.
Mar 12 23:45:49.146601 kernel: CPU: All CPU(s) started at EL1
Mar 12 23:45:49.146628 kernel: CPU features: detected: 32-bit EL0 Support
Mar 12 23:45:49.146646 kernel: CPU features: detected: 32-bit EL1 Support
Mar 12 23:45:49.146667 kernel: CPU features: detected: CRC32 instructions
Mar 12 23:45:49.146685 kernel: alternatives: applying system-wide alternatives
Mar 12 23:45:49.146703 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Mar 12 23:45:49.146744 kernel: devtmpfs: initialized
Mar 12 23:45:49.146764 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 23:45:49.146788 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 12 23:45:49.146806 kernel: 16880 pages in range for non-PLT usage
Mar 12 23:45:49.146824 kernel: 508400 pages in range for PLT usage
Mar 12 23:45:49.146842 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 23:45:49.146859 kernel: SMBIOS 3.0.0 present.
Mar 12 23:45:49.146877 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 12 23:45:49.146894 kernel: DMI: Memory slots populated: 0/0
Mar 12 23:45:49.146912 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 23:45:49.146929 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 12 23:45:49.146951 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 12 23:45:49.146969 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 12 23:45:49.146987 kernel: audit: initializing netlink subsys (disabled)
Mar 12 23:45:49.147005 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Mar 12 23:45:49.147022 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 23:45:49.147040 kernel: cpuidle: using governor menu
Mar 12 23:45:49.147058 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 12 23:45:49.147076 kernel: ASID allocator initialised with 65536 entries
Mar 12 23:45:49.147094 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 23:45:49.147115 kernel: Serial: AMBA PL011 UART driver
Mar 12 23:45:49.147133 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 23:45:49.147150 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 23:45:49.147168 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 12 23:45:49.147186 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 12 23:45:49.147204 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 23:45:49.147222 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 23:45:49.147239 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 12 23:45:49.147257 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 12 23:45:49.147279 kernel: ACPI: Added _OSI(Module Device)
Mar 12 23:45:49.147297 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 23:45:49.147314 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 23:45:49.147332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 23:45:49.147349 kernel: ACPI: Interpreter enabled
Mar 12 23:45:49.147367 kernel: ACPI: Using GIC for interrupt routing
Mar 12 23:45:49.147385 kernel: ACPI: MCFG table detected, 1 entries
Mar 12 23:45:49.147402 kernel: ACPI: CPU0 has been hot-added
Mar 12 23:45:49.147420 kernel: ACPI: CPU1 has been hot-added
Mar 12 23:45:49.147441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 12 23:45:49.155536 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 23:45:49.155807 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 12 23:45:49.155994 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 12 23:45:49.156175 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 12 23:45:49.156356 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 12 23:45:49.156380 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 12 23:45:49.156409 kernel: acpiphp: Slot [1] registered
Mar 12 23:45:49.156428 kernel: acpiphp: Slot [2] registered
Mar 12 23:45:49.156446 kernel: acpiphp: Slot [3] registered
Mar 12 23:45:49.156463 kernel: acpiphp: Slot [4] registered
Mar 12 23:45:49.156481 kernel: acpiphp: Slot [5] registered
Mar 12 23:45:49.156498 kernel: acpiphp: Slot [6] registered
Mar 12 23:45:49.156516 kernel: acpiphp: Slot [7] registered
Mar 12 23:45:49.156533 kernel: acpiphp: Slot [8] registered
Mar 12 23:45:49.156551 kernel: acpiphp: Slot [9] registered
Mar 12 23:45:49.156568 kernel: acpiphp: Slot [10] registered
Mar 12 23:45:49.156590 kernel: acpiphp: Slot [11] registered
Mar 12 23:45:49.156607 kernel: acpiphp: Slot [12] registered
Mar 12 23:45:49.156625 kernel: acpiphp: Slot [13] registered
Mar 12 23:45:49.156642 kernel: acpiphp: Slot [14] registered
Mar 12 23:45:49.156660 kernel: acpiphp: Slot [15] registered
Mar 12 23:45:49.156678 kernel: acpiphp: Slot [16] registered
Mar 12 23:45:49.156695 kernel: acpiphp: Slot [17] registered
Mar 12 23:45:49.156743 kernel: acpiphp: Slot [18] registered
Mar 12 23:45:49.156766 kernel: acpiphp: Slot [19] registered
Mar 12 23:45:49.156789 kernel: acpiphp: Slot [20] registered
Mar 12 23:45:49.156807 kernel: acpiphp: Slot [21] registered
Mar 12 23:45:49.156824 kernel: acpiphp: Slot [22] registered
Mar 12 23:45:49.156842 kernel: acpiphp: Slot [23] registered
Mar 12 23:45:49.156859 kernel: acpiphp: Slot [24] registered
Mar 12 23:45:49.156878 kernel: acpiphp: Slot [25] registered
Mar 12 23:45:49.156895 kernel: acpiphp: Slot [26] registered
Mar 12 23:45:49.156913 kernel: acpiphp: Slot [27] registered
Mar 12 23:45:49.156930 kernel: acpiphp: Slot [28] registered
Mar 12 23:45:49.156948 kernel: acpiphp: Slot [29] registered
Mar 12 23:45:49.156970 kernel: acpiphp: Slot [30] registered
Mar 12 23:45:49.156987 kernel: acpiphp: Slot [31] registered
Mar 12 23:45:49.157005 kernel: PCI host bridge to bus 0000:00
Mar 12 23:45:49.157195 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 12 23:45:49.157364 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 12 23:45:49.157531 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 12 23:45:49.157695 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 12 23:45:49.158479 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Mar 12 23:45:49.158705 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Mar 12 23:45:49.158936 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Mar 12 23:45:49.159159 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Mar 12 23:45:49.159350 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Mar 12 23:45:49.159537 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 12 23:45:49.160813 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Mar 12 23:45:49.161051 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Mar 12 23:45:49.161242 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Mar 12 23:45:49.161432 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Mar 12 23:45:49.161619 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 12 23:45:49.161826 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 12 23:45:49.162023 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 12 23:45:49.162203 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 12 23:45:49.162228 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 12 23:45:49.162247 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 12 23:45:49.162265 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 12 23:45:49.162283 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 12 23:45:49.162301 kernel: iommu: Default domain type: Translated
Mar 12 23:45:49.162318 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 12 23:45:49.162336 kernel: efivars: Registered efivars operations
Mar 12 23:45:49.162353 kernel: vgaarb: loaded
Mar 12 23:45:49.162376 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 12 23:45:49.162394 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 23:45:49.162412 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 23:45:49.162429 kernel: pnp: PnP ACPI init
Mar 12 23:45:49.162634 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 12 23:45:49.162660 kernel: pnp: PnP ACPI: found 1 devices
Mar 12 23:45:49.162678 kernel: NET: Registered PF_INET protocol family
Mar 12 23:45:49.162696 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 23:45:49.162738 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 23:45:49.162759 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 23:45:49.162778 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 23:45:49.162796 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 23:45:49.162814 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 23:45:49.162832 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 23:45:49.162850 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 23:45:49.162868 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 23:45:49.162886 kernel: PCI: CLS 0 bytes, default 64
Mar 12 23:45:49.162909 kernel: kvm [1]: HYP mode not available
Mar 12 23:45:49.162927 kernel: Initialise system trusted keyrings
Mar 12 23:45:49.162944 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 23:45:49.162962 kernel: Key type asymmetric registered
Mar 12 23:45:49.162979 kernel: Asymmetric key parser 'x509' registered
Mar 12 23:45:49.162997 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 12 23:45:49.163015 kernel: io scheduler mq-deadline registered
Mar 12 23:45:49.163033 kernel: io scheduler kyber registered
Mar 12 23:45:49.163051 kernel: io scheduler bfq registered
Mar 12 23:45:49.163290 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 12 23:45:49.163319 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 12 23:45:49.163338 kernel: ACPI: button: Power Button [PWRB]
Mar 12 23:45:49.163356 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 12 23:45:49.163374 kernel: ACPI: button: Sleep Button [SLPB]
Mar 12 23:45:49.163392 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 23:45:49.163433 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 12 23:45:49.163674 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 12 23:45:49.163707 kernel: printk: legacy console [ttyS0] disabled
Mar 12 23:45:49.163783 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 12 23:45:49.163802 kernel: printk: legacy console [ttyS0] enabled
Mar 12 23:45:49.163819 kernel: printk: legacy bootconsole [uart0] disabled
Mar 12 23:45:49.163837 kernel: thunder_xcv, ver 1.0
Mar 12 23:45:49.163855 kernel: thunder_bgx, ver 1.0
Mar 12 23:45:49.163873 kernel: nicpf, ver 1.0
Mar 12 23:45:49.163890 kernel: nicvf, ver 1.0
Mar 12 23:45:49.164098 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 12 23:45:49.164280 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-12T23:45:48 UTC (1773359148)
Mar 12 23:45:49.164305 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 12 23:45:49.164324 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Mar 12 23:45:49.164342 kernel: watchdog: NMI not fully supported
Mar 12 23:45:49.164359 kernel: NET: Registered PF_INET6 protocol family
Mar 12 23:45:49.164377 kernel: watchdog: Hard watchdog permanently disabled
Mar 12 23:45:49.164395 kernel: Segment Routing with IPv6
Mar 12 23:45:49.164413 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 23:45:49.164431 kernel: NET: Registered PF_PACKET protocol family
Mar 12 23:45:49.164453 kernel: Key type dns_resolver registered
Mar 12 23:45:49.164471 kernel: registered taskstats version 1
Mar 12 23:45:49.164488 kernel: Loading compiled-in X.509 certificates
Mar 12 23:45:49.164506 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 653709f5ad64856a37b70c07139630123477ee1c'
Mar 12 23:45:49.164524 kernel: Demotion targets for Node 0: null
Mar 12 23:45:49.164542 kernel: Key type .fscrypt registered
Mar 12 23:45:49.164560 kernel: Key type fscrypt-provisioning registered
Mar 12 23:45:49.164578 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 23:45:49.167149 kernel: ima: Allocated hash algorithm: sha1
Mar 12 23:45:49.167184 kernel: ima: No architecture policies found
Mar 12 23:45:49.167205 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 12 23:45:49.167223 kernel: clk: Disabling unused clocks
Mar 12 23:45:49.167241 kernel: PM: genpd: Disabling unused power domains
Mar 12 23:45:49.167259 kernel: Warning: unable to open an initial console.
Mar 12 23:45:49.167278 kernel: Freeing unused kernel memory: 39552K
Mar 12 23:45:49.167296 kernel: Run /init as init process
Mar 12 23:45:49.167314 kernel: with arguments:
Mar 12 23:45:49.167332 kernel: /init
Mar 12 23:45:49.167355 kernel: with environment:
Mar 12 23:45:49.167372 kernel: HOME=/
Mar 12 23:45:49.167390 kernel: TERM=linux
Mar 12 23:45:49.167410 systemd[1]: Successfully made /usr/ read-only.
Mar 12 23:45:49.167435 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 23:45:49.167456 systemd[1]: Detected virtualization amazon.
Mar 12 23:45:49.167475 systemd[1]: Detected architecture arm64.
Mar 12 23:45:49.167498 systemd[1]: Running in initrd.
Mar 12 23:45:49.167517 systemd[1]: No hostname configured, using default hostname.
Mar 12 23:45:49.167537 systemd[1]: Hostname set to .
Mar 12 23:45:49.167556 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 23:45:49.167575 systemd[1]: Queued start job for default target initrd.target.
Mar 12 23:45:49.167594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 23:45:49.167613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 23:45:49.167633 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 23:45:49.167656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 23:45:49.167676 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 23:45:49.167696 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 23:45:49.169302 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 23:45:49.169336 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 23:45:49.169356 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 23:45:49.169376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 23:45:49.169404 systemd[1]: Reached target paths.target - Path Units.
Mar 12 23:45:49.169424 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 23:45:49.169443 systemd[1]: Reached target swap.target - Swaps.
Mar 12 23:45:49.169462 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 23:45:49.169481 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 23:45:49.169500 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 23:45:49.169520 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 23:45:49.169539 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 12 23:45:49.169558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 23:45:49.169582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 23:45:49.169601 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 23:45:49.169620 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 23:45:49.169640 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 23:45:49.169659 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 23:45:49.169678 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 23:45:49.169698 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 12 23:45:49.169762 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 23:45:49.169793 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 23:45:49.169813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 23:45:49.169833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 23:45:49.169852 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 23:45:49.169872 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 23:45:49.169896 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 23:45:49.169916 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 23:45:49.170005 systemd-journald[258]: Collecting audit messages is disabled.
Mar 12 23:45:49.170047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 23:45:49.170074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 23:45:49.170093 kernel: Bridge firewalling registered
Mar 12 23:45:49.170112 systemd-journald[258]: Journal started
Mar 12 23:45:49.170148 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2e8060e672487c927dae1f921fbe3d) is 8M, max 75.3M, 67.3M free.
Mar 12 23:45:49.124310 systemd-modules-load[260]: Inserted module 'overlay'
Mar 12 23:45:49.168449 systemd-modules-load[260]: Inserted module 'br_netfilter'
Mar 12 23:45:49.182623 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 23:45:49.182678 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 23:45:49.189810 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 23:45:49.198897 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 23:45:49.203975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 23:45:49.208015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 23:45:49.216070 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 23:45:49.252892 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 23:45:49.259351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:45:49.265488 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 12 23:45:49.275337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 23:45:49.278838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 23:45:49.290101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 23:45:49.298978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 23:45:49.336837 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9bf054737b516803a47d5bd373cc1c618bc257c93cef3d2e2bc09897e693383d
Mar 12 23:45:49.400932 systemd-resolved[300]: Positive Trust Anchors:
Mar 12 23:45:49.400966 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 23:45:49.401029 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 23:45:49.491749 kernel: SCSI subsystem initialized
Mar 12 23:45:49.499751 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 23:45:49.511748 kernel: iscsi: registered transport (tcp)
Mar 12 23:45:49.533753 kernel: iscsi: registered transport (qla4xxx)
Mar 12 23:45:49.533826 kernel: QLogic iSCSI HBA Driver
Mar 12 23:45:49.567357 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 23:45:49.599091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 23:45:49.611229 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 23:45:49.691088 kernel: random: crng init done Mar 12 23:45:49.691209 systemd-resolved[300]: Defaulting to hostname 'linux'. Mar 12 23:45:49.695700 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 23:45:49.704605 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 23:45:49.732650 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 23:45:49.738533 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 23:45:49.831760 kernel: raid6: neonx8 gen() 6471 MB/s Mar 12 23:45:49.848757 kernel: raid6: neonx4 gen() 6454 MB/s Mar 12 23:45:49.865756 kernel: raid6: neonx2 gen() 5345 MB/s Mar 12 23:45:49.882752 kernel: raid6: neonx1 gen() 3927 MB/s Mar 12 23:45:49.899763 kernel: raid6: int64x8 gen() 3619 MB/s Mar 12 23:45:49.916747 kernel: raid6: int64x4 gen() 3662 MB/s Mar 12 23:45:49.933771 kernel: raid6: int64x2 gen() 3533 MB/s Mar 12 23:45:49.951856 kernel: raid6: int64x1 gen() 2742 MB/s Mar 12 23:45:49.951943 kernel: raid6: using algorithm neonx8 gen() 6471 MB/s Mar 12 23:45:49.970777 kernel: raid6: .... xor() 4724 MB/s, rmw enabled Mar 12 23:45:49.970874 kernel: raid6: using neon recovery algorithm Mar 12 23:45:49.979999 kernel: xor: measuring software checksum speed Mar 12 23:45:49.980079 kernel: 8regs : 12992 MB/sec Mar 12 23:45:49.981225 kernel: 32regs : 12873 MB/sec Mar 12 23:45:49.982608 kernel: arm64_neon : 9059 MB/sec Mar 12 23:45:49.982664 kernel: xor: using function: 8regs (12992 MB/sec) Mar 12 23:45:50.077768 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 23:45:50.093814 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Mar 12 23:45:50.103278 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 23:45:50.155806 systemd-udevd[508]: Using default interface naming scheme 'v255'. Mar 12 23:45:50.166403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 23:45:50.176598 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 23:45:50.211838 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation Mar 12 23:45:50.256798 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 23:45:50.263469 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 23:45:50.398442 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 23:45:50.409558 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 23:45:50.598634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 23:45:50.598803 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:45:50.607741 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:45:50.613045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:45:50.618630 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 12 23:45:50.618699 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 12 23:45:50.622295 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 12 23:45:50.635131 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 12 23:45:50.635528 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 12 23:45:50.635804 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 12 23:45:50.638519 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 12 23:45:50.650768 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:b3:43:32:69:51 Mar 12 23:45:50.653760 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 12 23:45:50.665523 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 23:45:50.665604 kernel: GPT:9289727 != 33554431 Mar 12 23:45:50.665629 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 23:45:50.667410 kernel: GPT:9289727 != 33554431 Mar 12 23:45:50.669359 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 12 23:45:50.670464 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 12 23:45:50.673023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:45:50.684555 (udev-worker)[561]: Network interface NamePolicy= disabled on kernel command line. Mar 12 23:45:50.722757 kernel: nvme nvme0: using unchecked data buffer Mar 12 23:45:50.849245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 12 23:45:50.890254 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 12 23:45:50.890527 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 12 23:45:50.924135 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 23:45:50.968543 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 12 23:45:50.996458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Mar 12 23:45:51.002402 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 23:45:51.008434 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 23:45:51.011380 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 23:45:51.020944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 23:45:51.032940 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 23:45:51.064369 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 12 23:45:51.064439 disk-uuid[686]: Primary Header is updated. Mar 12 23:45:51.064439 disk-uuid[686]: Secondary Entries is updated. Mar 12 23:45:51.064439 disk-uuid[686]: Secondary Header is updated. Mar 12 23:45:51.073986 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 23:45:52.118278 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 12 23:45:52.119241 disk-uuid[693]: The operation has completed successfully. Mar 12 23:45:52.303770 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 23:45:52.304361 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 23:45:52.398035 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 23:45:52.431934 sh[956]: Success Mar 12 23:45:52.460759 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 12 23:45:52.464004 kernel: device-mapper: uevent: version 1.0.3 Mar 12 23:45:52.464084 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 12 23:45:52.477769 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Mar 12 23:45:52.588343 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 23:45:52.596654 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Mar 12 23:45:52.619801 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 12 23:45:52.639792 kernel: BTRFS: device fsid fcbb17b2-5053-44fc-82f0-b24e4919d6d8 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (979) Mar 12 23:45:52.643939 kernel: BTRFS info (device dm-0): first mount of filesystem fcbb17b2-5053-44fc-82f0-b24e4919d6d8 Mar 12 23:45:52.643988 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 12 23:45:52.673198 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 12 23:45:52.673276 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 12 23:45:52.674738 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 12 23:45:52.678168 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 23:45:52.679234 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 12 23:45:52.681041 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 23:45:52.682473 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 23:45:52.687092 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 12 23:45:52.750006 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1009) Mar 12 23:45:52.750080 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:52.753948 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 12 23:45:52.763780 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 12 23:45:52.763879 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 12 23:45:52.772823 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:52.774487 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 12 23:45:52.780857 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 23:45:52.921599 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 23:45:52.930640 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 23:45:53.011823 systemd-networkd[1150]: lo: Link UP Mar 12 23:45:53.011845 systemd-networkd[1150]: lo: Gained carrier Mar 12 23:45:53.017725 systemd-networkd[1150]: Enumeration completed Mar 12 23:45:53.017956 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 23:45:53.020534 systemd[1]: Reached target network.target - Network. Mar 12 23:45:53.030385 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 23:45:53.030406 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 12 23:45:53.044227 systemd-networkd[1150]: eth0: Link UP Mar 12 23:45:53.044756 systemd-networkd[1150]: eth0: Gained carrier Mar 12 23:45:53.044782 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 23:45:53.069821 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.24.143/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 12 23:45:53.089550 ignition[1067]: Ignition 2.22.0 Mar 12 23:45:53.089579 ignition[1067]: Stage: fetch-offline Mar 12 23:45:53.097279 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 23:45:53.090499 ignition[1067]: no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:53.090521 ignition[1067]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:53.109393 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 12 23:45:53.090914 ignition[1067]: Ignition finished successfully Mar 12 23:45:53.170046 ignition[1161]: Ignition 2.22.0 Mar 12 23:45:53.170074 ignition[1161]: Stage: fetch Mar 12 23:45:53.171257 ignition[1161]: no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:53.171285 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:53.171417 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:53.186169 ignition[1161]: PUT result: OK Mar 12 23:45:53.189641 ignition[1161]: parsed url from cmdline: "" Mar 12 23:45:53.189819 ignition[1161]: no config URL provided Mar 12 23:45:53.189848 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 23:45:53.189873 ignition[1161]: no config at "/usr/lib/ignition/user.ign" Mar 12 23:45:53.189927 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:53.194146 ignition[1161]: PUT result: OK Mar 12 23:45:53.196143 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 12 23:45:53.200357 
ignition[1161]: GET result: OK Mar 12 23:45:53.202888 ignition[1161]: parsing config with SHA512: e8743b408f15a0d1d7ac2be6b84c531b69e5599fc31621618697b9e5de41d3aa90b15f1012c2207353491263c6cdc315fd9a0a62c62e139037fd316989922765 Mar 12 23:45:53.220045 unknown[1161]: fetched base config from "system" Mar 12 23:45:53.220065 unknown[1161]: fetched base config from "system" Mar 12 23:45:53.220686 unknown[1161]: fetched user config from "aws" Mar 12 23:45:53.224998 ignition[1161]: fetch: fetch complete Mar 12 23:45:53.225011 ignition[1161]: fetch: fetch passed Mar 12 23:45:53.225300 ignition[1161]: Ignition finished successfully Mar 12 23:45:53.235770 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 12 23:45:53.240258 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 12 23:45:53.290373 ignition[1168]: Ignition 2.22.0 Mar 12 23:45:53.290932 ignition[1168]: Stage: kargs Mar 12 23:45:53.291474 ignition[1168]: no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:53.291497 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:53.291620 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:53.302907 ignition[1168]: PUT result: OK Mar 12 23:45:53.313101 ignition[1168]: kargs: kargs passed Mar 12 23:45:53.313196 ignition[1168]: Ignition finished successfully Mar 12 23:45:53.319203 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 12 23:45:53.326859 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 12 23:45:53.371795 ignition[1174]: Ignition 2.22.0 Mar 12 23:45:53.372281 ignition[1174]: Stage: disks Mar 12 23:45:53.372833 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:53.372856 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:53.372977 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:53.383327 ignition[1174]: PUT result: OK Mar 12 23:45:53.387705 ignition[1174]: disks: disks passed Mar 12 23:45:53.387881 ignition[1174]: Ignition finished successfully Mar 12 23:45:53.392043 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 23:45:53.398603 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 23:45:53.401660 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 23:45:53.409827 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 23:45:53.412765 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 23:45:53.419637 systemd[1]: Reached target basic.target - Basic System. Mar 12 23:45:53.426988 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 12 23:45:53.488952 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 12 23:45:53.493354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 23:45:53.501964 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 12 23:45:53.633759 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4b09db19-3beb-48c2-8dcb-3eec5602206c r/w with ordered data mode. Quota mode: none. Mar 12 23:45:53.635670 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 23:45:53.640015 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 12 23:45:53.647475 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 23:45:53.667125 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 12 23:45:53.673472 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 12 23:45:53.673611 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 23:45:53.673668 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 23:45:53.701616 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 12 23:45:53.707748 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Mar 12 23:45:53.707794 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:53.709649 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 12 23:45:53.709974 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 12 23:45:53.719642 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 12 23:45:53.719740 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 12 23:45:53.722392 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 23:45:53.820654 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Mar 12 23:45:53.832957 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Mar 12 23:45:53.843257 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Mar 12 23:45:53.853633 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Mar 12 23:45:54.038536 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 23:45:54.045665 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 23:45:54.053682 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Mar 12 23:45:54.082841 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:54.083438 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 12 23:45:54.128119 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 12 23:45:54.139559 ignition[1314]: INFO : Ignition 2.22.0 Mar 12 23:45:54.139559 ignition[1314]: INFO : Stage: mount Mar 12 23:45:54.143910 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:54.143910 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:54.143910 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:54.154905 ignition[1314]: INFO : PUT result: OK Mar 12 23:45:54.154905 ignition[1314]: INFO : mount: mount passed Mar 12 23:45:54.154905 ignition[1314]: INFO : Ignition finished successfully Mar 12 23:45:54.161362 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 23:45:54.168066 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 23:45:54.202811 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 23:45:54.256769 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327) Mar 12 23:45:54.261286 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:54.261354 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 12 23:45:54.269028 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 12 23:45:54.269100 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 12 23:45:54.273502 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 12 23:45:54.326112 ignition[1344]: INFO : Ignition 2.22.0 Mar 12 23:45:54.326112 ignition[1344]: INFO : Stage: files Mar 12 23:45:54.330434 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:54.330434 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:54.330434 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:54.342199 ignition[1344]: INFO : PUT result: OK Mar 12 23:45:54.349276 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping Mar 12 23:45:54.356079 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 23:45:54.356079 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 23:45:54.367292 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 23:45:54.370872 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 23:45:54.374552 unknown[1344]: wrote ssh authorized keys file for user: core Mar 12 23:45:54.377396 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 23:45:54.382952 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 12 23:45:54.387761 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 12 23:45:54.485892 systemd-networkd[1150]: eth0: Gained IPv6LL Mar 12 23:45:54.501527 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 23:45:54.738039 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 12 23:45:54.738039 ignition[1344]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 12 23:45:54.746687 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 23:45:54.751843 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 23:45:54.751843 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 23:45:54.751843 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 23:45:54.751843 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 23:45:54.751843 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 23:45:54.751843 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 23:45:54.778023 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 23:45:54.778023 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 23:45:54.778023 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 12 23:45:54.778023 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 12 23:45:54.778023 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 12 23:45:54.778023 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Mar 12 23:45:55.125236 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 23:45:55.536249 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 12 23:45:55.536249 ignition[1344]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 23:45:55.545946 ignition[1344]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 23:45:55.545946 ignition[1344]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 23:45:55.545946 ignition[1344]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 23:45:55.545946 ignition[1344]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 12 23:45:55.545946 ignition[1344]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 23:45:55.545946 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 23:45:55.545946 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 23:45:55.545946 ignition[1344]: INFO : files: files passed Mar 12 23:45:55.545946 ignition[1344]: INFO : Ignition finished successfully Mar 12 23:45:55.549047 systemd[1]: Finished ignition-files.service - Ignition (files). 
Mar 12 23:45:55.568973 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 23:45:55.601161 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 12 23:45:55.610136 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 23:45:55.612641 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 23:45:55.653629 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 23:45:55.658093 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 23:45:55.658093 initrd-setup-root-after-ignition[1374]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 23:45:55.664329 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 23:45:55.674286 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 23:45:55.681082 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 23:45:55.770035 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 23:45:55.772385 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 23:45:55.776289 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 23:45:55.778865 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 23:45:55.784325 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 23:45:55.786959 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 23:45:55.835030 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 23:45:55.843307 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Mar 12 23:45:55.878259 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 23:45:55.883834 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 23:45:55.887036 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 23:45:55.894126 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 23:45:55.894536 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 23:45:55.900434 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 23:45:55.904971 systemd[1]: Stopped target basic.target - Basic System. Mar 12 23:45:55.911403 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 23:45:55.916967 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 23:45:55.920605 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 23:45:55.928999 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 12 23:45:55.931954 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 23:45:55.939386 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 23:45:55.945607 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 23:45:55.949431 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 23:45:55.956451 systemd[1]: Stopped target swap.target - Swaps. Mar 12 23:45:55.958422 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 23:45:55.958651 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 23:45:55.966994 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 23:45:55.969560 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 23:45:55.972756 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Mar 12 23:45:55.979591 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 23:45:55.982477 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 23:45:55.982775 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 23:45:55.988552 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 23:45:55.988828 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 23:45:55.991962 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 23:45:55.992156 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 23:45:55.994249 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 23:45:56.012417 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 23:45:56.016149 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 23:45:56.016479 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 23:45:56.026547 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 23:45:56.026814 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 23:45:56.048636 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 23:45:56.055421 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 12 23:45:56.082183 ignition[1398]: INFO : Ignition 2.22.0 Mar 12 23:45:56.084782 ignition[1398]: INFO : Stage: umount Mar 12 23:45:56.084782 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:56.084782 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:56.084782 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:56.094852 ignition[1398]: INFO : PUT result: OK Mar 12 23:45:56.099976 ignition[1398]: INFO : umount: umount passed Mar 12 23:45:56.102063 ignition[1398]: INFO : Ignition finished successfully Mar 12 23:45:56.106554 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 23:45:56.107694 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 23:45:56.107961 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 23:45:56.112506 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 23:45:56.112688 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 23:45:56.115578 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 23:45:56.116451 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 23:45:56.119638 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 12 23:45:56.119760 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 12 23:45:56.123046 systemd[1]: Stopped target network.target - Network. Mar 12 23:45:56.127028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 23:45:56.127157 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 23:45:56.131823 systemd[1]: Stopped target paths.target - Path Units. Mar 12 23:45:56.133949 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 23:45:56.140333 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 12 23:45:56.140503 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 23:45:56.147106 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 23:45:56.151843 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 23:45:56.151930 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 23:45:56.155851 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 23:45:56.155934 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 23:45:56.160204 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 23:45:56.160391 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 23:45:56.164561 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 23:45:56.164658 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 23:45:56.169479 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 23:45:56.174093 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 23:45:56.180406 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 23:45:56.180595 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 23:45:56.198103 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 23:45:56.199994 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 23:45:56.207122 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 12 23:45:56.207535 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 23:45:56.207795 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 23:45:56.219174 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 12 23:45:56.222936 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 12 23:45:56.227114 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 23:45:56.227220 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 23:45:56.232128 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 23:45:56.232250 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 23:45:56.249273 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 23:45:56.252931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 23:45:56.253054 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 23:45:56.256504 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 23:45:56.256612 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:45:56.279490 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 23:45:56.279596 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 23:45:56.287598 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 23:45:56.287706 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 23:45:56.307191 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 23:45:56.317454 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 12 23:45:56.318097 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 12 23:45:56.338372 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 23:45:56.343357 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 23:45:56.347872 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 23:45:56.348007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 23:45:56.352008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 23:45:56.352084 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 23:45:56.354576 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 23:45:56.354673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 23:45:56.367589 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 23:45:56.367731 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 23:45:56.372556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 23:45:56.372662 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 23:45:56.386406 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 23:45:56.391910 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 12 23:45:56.392040 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 23:45:56.404884 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 23:45:56.405172 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 23:45:56.414099 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 12 23:45:56.414203 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 23:45:56.423420 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 23:45:56.423524 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 23:45:56.426617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 23:45:56.426750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 23:45:56.439511 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 12 23:45:56.440407 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 12 23:45:56.440501 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 12 23:45:56.440587 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 12 23:45:56.444693 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 23:45:56.446562 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 23:45:56.468306 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 23:45:56.469281 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 23:45:56.478186 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 23:45:56.486020 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 23:45:56.524407 systemd[1]: Switching root.
Mar 12 23:45:56.571668 systemd-journald[258]: Journal stopped
Mar 12 23:45:58.665228 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Mar 12 23:45:58.665368 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 23:45:58.665415 kernel: SELinux: policy capability open_perms=1
Mar 12 23:45:58.665456 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 23:45:58.665487 kernel: SELinux: policy capability always_check_network=0
Mar 12 23:45:58.665517 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 23:45:58.665547 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 23:45:58.665577 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 23:45:58.665610 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 23:45:58.665637 kernel: SELinux: policy capability userspace_initial_context=0
Mar 12 23:45:58.665667 kernel: audit: type=1403 audit(1773359156.936:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 23:45:58.670810 systemd[1]: Successfully loaded SELinux policy in 90.083ms.
Mar 12 23:45:58.670899 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.030ms.
Mar 12 23:45:58.670936 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 23:45:58.670968 systemd[1]: Detected virtualization amazon.
Mar 12 23:45:58.670998 systemd[1]: Detected architecture arm64.
Mar 12 23:45:58.671030 systemd[1]: Detected first boot.
Mar 12 23:45:58.671058 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 23:45:58.671089 zram_generator::config[1441]: No configuration found.
Mar 12 23:45:58.671121 kernel: NET: Registered PF_VSOCK protocol family
Mar 12 23:45:58.671157 systemd[1]: Populated /etc with preset unit settings.
Mar 12 23:45:58.671191 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 12 23:45:58.671222 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 23:45:58.671252 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 23:45:58.671282 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 23:45:58.671352 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 23:45:58.671395 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 23:45:58.671428 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 23:45:58.671465 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 23:45:58.674575 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 23:45:58.674643 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 23:45:58.674673 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 23:45:58.674732 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 23:45:58.676848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 23:45:58.676900 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 23:45:58.676932 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 23:45:58.676964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 23:45:58.677006 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 23:45:58.677038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 23:45:58.677071 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 12 23:45:58.677105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 23:45:58.677147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 23:45:58.677177 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 23:45:58.677216 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 23:45:58.677248 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 23:45:58.677289 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 23:45:58.677322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 23:45:58.677353 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 23:45:58.677387 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 23:45:58.677418 systemd[1]: Reached target swap.target - Swaps.
Mar 12 23:45:58.677450 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 23:45:58.677482 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 23:45:58.677510 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 12 23:45:58.677543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 23:45:58.677582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 23:45:58.677613 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 23:45:58.677644 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 23:45:58.677676 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 23:45:58.677706 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 23:45:58.684319 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 23:45:58.684352 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 23:45:58.684386 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 23:45:58.684417 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 23:45:58.684457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 23:45:58.684487 systemd[1]: Reached target machines.target - Containers.
Mar 12 23:45:58.684516 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 23:45:58.684545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:45:58.684575 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 23:45:58.684604 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 23:45:58.684633 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 23:45:58.684661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 23:45:58.684694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 23:45:58.684773 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 23:45:58.684809 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 23:45:58.684840 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 23:45:58.684870 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 12 23:45:58.684903 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 12 23:45:58.684936 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 12 23:45:58.684968 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 12 23:45:58.685000 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:45:58.685039 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 23:45:58.685071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 23:45:58.685100 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 23:45:58.685130 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 23:45:58.685158 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 12 23:45:58.685187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 23:45:58.685224 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 12 23:45:58.685253 systemd[1]: Stopped verity-setup.service.
Mar 12 23:45:58.685282 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 23:45:58.685311 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 23:45:58.685352 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 23:45:58.685385 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 23:45:58.685418 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 23:45:58.685449 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 23:45:58.685479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 23:45:58.685509 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 23:45:58.685538 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 23:45:58.685570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 23:45:58.685599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 23:45:58.698595 systemd-journald[1520]: Collecting audit messages is disabled.
Mar 12 23:45:58.698746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 23:45:58.698788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 23:45:58.698821 kernel: loop: module loaded
Mar 12 23:45:58.698850 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 23:45:58.698881 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 23:45:58.698910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 23:45:58.698940 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 23:45:58.698977 systemd-journald[1520]: Journal started
Mar 12 23:45:58.699025 systemd-journald[1520]: Runtime Journal (/run/log/journal/ec2e8060e672487c927dae1f921fbe3d) is 8M, max 75.3M, 67.3M free.
Mar 12 23:45:58.089459 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 23:45:58.101473 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 12 23:45:58.707878 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 23:45:58.102478 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 12 23:45:58.733175 kernel: fuse: init (API version 7.41)
Mar 12 23:45:58.715126 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 23:45:58.732619 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 23:45:58.736068 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 23:45:58.739971 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 23:45:58.742907 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 23:45:58.746580 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 23:45:58.788548 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 23:45:58.803698 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 23:45:58.807598 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 23:45:58.807650 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 23:45:58.822777 kernel: ACPI: bus type drm_connector registered
Mar 12 23:45:58.813499 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 12 23:45:58.824047 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 23:45:58.826861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:45:58.838041 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 23:45:58.846178 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 23:45:58.849007 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 23:45:58.854187 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 23:45:58.856879 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 23:45:58.864903 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 23:45:58.870306 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 23:45:58.870853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 23:45:58.885619 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 12 23:45:58.934345 systemd-journald[1520]: Time spent on flushing to /var/log/journal/ec2e8060e672487c927dae1f921fbe3d is 193.458ms for 924 entries.
Mar 12 23:45:58.934345 systemd-journald[1520]: System Journal (/var/log/journal/ec2e8060e672487c927dae1f921fbe3d) is 8M, max 195.6M, 187.6M free.
Mar 12 23:45:59.143662 systemd-journald[1520]: Received client request to flush runtime journal.
Mar 12 23:45:59.143800 kernel: loop0: detected capacity change from 0 to 100632
Mar 12 23:45:59.143878 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 23:45:58.943413 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 23:45:58.949543 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 23:45:58.955450 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 12 23:45:59.000817 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 23:45:59.010145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:45:59.045518 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 12 23:45:59.069016 systemd-tmpfiles[1538]: ACLs are not supported, ignoring.
Mar 12 23:45:59.069042 systemd-tmpfiles[1538]: ACLs are not supported, ignoring.
Mar 12 23:45:59.105210 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 23:45:59.111920 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 23:45:59.120481 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 23:45:59.133138 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 23:45:59.150889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 23:45:59.157974 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 23:45:59.192776 kernel: loop1: detected capacity change from 0 to 61264
Mar 12 23:45:59.261815 kernel: loop2: detected capacity change from 0 to 200864
Mar 12 23:45:59.283840 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 23:45:59.287899 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 23:45:59.295673 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 23:45:59.336253 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Mar 12 23:45:59.336843 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Mar 12 23:45:59.344892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 23:45:59.400171 kernel: loop3: detected capacity change from 0 to 119840
Mar 12 23:45:59.455790 kernel: loop4: detected capacity change from 0 to 100632
Mar 12 23:45:59.482770 kernel: loop5: detected capacity change from 0 to 61264
Mar 12 23:45:59.522240 kernel: loop6: detected capacity change from 0 to 200864
Mar 12 23:45:59.557812 kernel: loop7: detected capacity change from 0 to 119840
Mar 12 23:45:59.591963 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 12 23:45:59.593012 (sd-merge)[1605]: Merged extensions into '/usr'.
Mar 12 23:45:59.604809 systemd[1]: Reload requested from client PID 1569 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 12 23:45:59.604842 systemd[1]: Reloading...
Mar 12 23:45:59.822008 zram_generator::config[1631]: No configuration found.
Mar 12 23:45:59.974793 ldconfig[1560]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 23:46:00.347571 systemd[1]: Reloading finished in 741 ms.
Mar 12 23:46:00.375857 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 23:46:00.379345 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 12 23:46:00.382967 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 12 23:46:00.410455 systemd[1]: Starting ensure-sysext.service...
Mar 12 23:46:00.417085 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 23:46:00.428929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 23:46:00.458108 systemd[1]: Reload requested from client PID 1685 ('systemctl') (unit ensure-sysext.service)...
Mar 12 23:46:00.458273 systemd[1]: Reloading...
Mar 12 23:46:00.484876 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 12 23:46:00.485993 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 12 23:46:00.486623 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 12 23:46:00.489116 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 12 23:46:00.497264 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 12 23:46:00.499474 systemd-tmpfiles[1686]: ACLs are not supported, ignoring.
Mar 12 23:46:00.501023 systemd-tmpfiles[1686]: ACLs are not supported, ignoring.
Mar 12 23:46:00.521860 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 23:46:00.522095 systemd-tmpfiles[1686]: Skipping /boot
Mar 12 23:46:00.567357 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 23:46:00.567385 systemd-tmpfiles[1686]: Skipping /boot
Mar 12 23:46:00.613127 systemd-udevd[1687]: Using default interface naming scheme 'v255'.
Mar 12 23:46:00.640777 zram_generator::config[1716]: No configuration found.
Mar 12 23:46:00.951979 (udev-worker)[1736]: Network interface NamePolicy= disabled on kernel command line.
Mar 12 23:46:01.329966 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 12 23:46:01.330181 systemd[1]: Reloading finished in 871 ms.
Mar 12 23:46:01.353846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 23:46:01.358784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 23:46:01.419201 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 12 23:46:01.425157 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 23:46:01.432105 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 23:46:01.440181 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 23:46:01.451269 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 23:46:01.459222 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 23:46:01.480841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:46:01.490317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 23:46:01.496179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 23:46:01.503297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 23:46:01.506077 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:46:01.506350 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:46:01.564841 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 23:46:01.573012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:46:01.573424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:46:01.573645 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:46:01.585530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:46:01.600315 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 23:46:01.603066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:46:01.603338 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:46:01.603687 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 23:46:01.633675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 23:46:01.634200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 23:46:01.637472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 23:46:01.653470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 23:46:01.656057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 23:46:01.665095 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 23:46:01.667858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 23:46:01.675316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 23:46:01.693205 systemd[1]: Finished ensure-sysext.service.
Mar 12 23:46:01.761818 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 23:46:01.772960 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 23:46:01.782157 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 23:46:01.784813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 23:46:01.800372 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 23:46:01.803668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 23:46:01.811971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 23:46:01.816391 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 23:46:01.880502 augenrules[1904]: No rules
Mar 12 23:46:01.886307 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 23:46:01.888957 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 12 23:46:01.913796 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 23:46:02.079224 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 23:46:02.159044 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 12 23:46:02.166051 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 12 23:46:02.192076 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 23:46:02.216133 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 23:46:02.332049 systemd-networkd[1821]: lo: Link UP
Mar 12 23:46:02.332071 systemd-networkd[1821]: lo: Gained carrier
Mar 12 23:46:02.335190 systemd-networkd[1821]: Enumeration completed
Mar 12 23:46:02.335424 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 23:46:02.336362 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 23:46:02.336392 systemd-networkd[1821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 23:46:02.342640 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 12 23:46:02.349167 systemd-networkd[1821]: eth0: Link UP
Mar 12 23:46:02.349811 systemd-networkd[1821]: eth0: Gained carrier
Mar 12 23:46:02.350051 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 23:46:02.353280 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 23:46:02.356403 systemd-resolved[1822]: Positive Trust Anchors:
Mar 12 23:46:02.356426 systemd-resolved[1822]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 23:46:02.356491 systemd-resolved[1822]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 23:46:02.372902 systemd-networkd[1821]: eth0: DHCPv4 address 172.31.24.143/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 12 23:46:02.376759 systemd-resolved[1822]: Defaulting to hostname 'linux'.
Mar 12 23:46:02.381542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 23:46:02.387156 systemd[1]: Reached target network.target - Network.
Mar 12 23:46:02.390910 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 23:46:02.393926 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 23:46:02.399219 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 23:46:02.402070 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 23:46:02.405278 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 23:46:02.410132 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 23:46:02.416451 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 23:46:02.422674 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 23:46:02.422934 systemd[1]: Reached target paths.target - Path Units.
Mar 12 23:46:02.425208 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 23:46:02.429238 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 23:46:02.436665 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 23:46:02.443476 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 12 23:46:02.450411 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 12 23:46:02.453426 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 12 23:46:02.462474 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 23:46:02.465759 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 12 23:46:02.471855 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 12 23:46:02.475352 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 23:46:02.479159 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 23:46:02.481678 systemd[1]: Reached target basic.target - Basic System.
Mar 12 23:46:02.484153 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 23:46:02.484232 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 23:46:02.486401 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 23:46:02.497000 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 12 23:46:02.510981 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 23:46:02.515273 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 23:46:02.522136 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 23:46:02.530236 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 23:46:02.532873 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 23:46:02.539186 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 23:46:02.550217 systemd[1]: Started ntpd.service - Network Time Service.
Mar 12 23:46:02.566337 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 23:46:02.572021 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 12 23:46:02.583197 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 23:46:02.587680 jq[1970]: false
Mar 12 23:46:02.592188 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 23:46:02.612840 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 23:46:02.620937 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 23:46:02.629349 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 23:46:02.636152 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 23:46:02.644075 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 23:46:02.658841 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 23:46:02.662704 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 23:46:02.663318 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 23:46:02.674421 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 23:46:02.680874 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 12 23:46:02.727181 extend-filesystems[1971]: Found /dev/nvme0n1p6
Mar 12 23:46:02.770071 jq[1983]: true
Mar 12 23:46:02.773330 extend-filesystems[1971]: Found /dev/nvme0n1p9
Mar 12 23:46:02.800760 extend-filesystems[1971]: Checking size of /dev/nvme0n1p9
Mar 12 23:46:02.852517 systemd[1]: motdgen.service: Deactivated successfully.
Mar 12 23:46:02.853896 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 12 23:46:02.864431 (ntainerd)[2010]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 12 23:46:02.871002 jq[2009]: true
Mar 12 23:46:02.897614 tar[1994]: linux-arm64/LICENSE
Mar 12 23:46:02.905405 tar[1994]: linux-arm64/helm
Mar 12 23:46:02.905530 update_engine[1981]: I20260312 23:46:02.894066 1981 main.cc:92] Flatcar Update Engine starting
Mar 12 23:46:02.900634 dbus-daemon[1968]: [system] SELinux support is enabled
Mar 12 23:46:02.909048 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 12 23:46:02.913360 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:41 UTC 2026 (1): Starting
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:41 UTC 2026 (1): Starting
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: ----------------------------------------------------
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: ntp-4 is maintained by Network Time Foundation,
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: corporation. Support and training for ntp-4 are
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: available at https://www.nwtime.org/support
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: ----------------------------------------------------
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: proto: precision = 0.096 usec (-23)
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: basedate set to 2026-02-28
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: gps base set to 2026-03-01 (week 2408)
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: Listen normally on 3 eth0 172.31.24.143:123
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: Listen normally on 4 lo [::1]:123
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: bind(21) AF_INET6 [fe80::4b3:43ff:fe32:6951%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 12 23:46:02.926501 ntpd[1973]: 12 Mar 23:46:02 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::4b3:43ff:fe32:6951%2]:123
Mar 12 23:46:02.915552 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 12 23:46:02.913467 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 12 23:46:02.915612 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 12 23:46:02.913486 ntpd[1973]: ----------------------------------------------------
Mar 12 23:46:02.920901 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 12 23:46:02.969983 extend-filesystems[1971]: Resized partition /dev/nvme0n1p9
Mar 12 23:46:02.913502 ntpd[1973]: ntp-4 is maintained by Network Time Foundation,
Mar 12 23:46:02.920938 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 12 23:46:02.913518 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 12 23:46:02.934363 systemd-coredump[2025]: Process 1973 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 12 23:46:02.913534 ntpd[1973]: corporation. Support and training for ntp-4 are
Mar 12 23:46:02.940611 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Mar 12 23:46:02.913551 ntpd[1973]: available at https://www.nwtime.org/support
Mar 12 23:46:02.958210 systemd[1]: Started systemd-coredump@0-2025-0.service - Process Core Dump (PID 2025/UID 0).
Mar 12 23:46:02.913567 ntpd[1973]: ----------------------------------------------------
Mar 12 23:46:02.920124 ntpd[1973]: proto: precision = 0.096 usec (-23)
Mar 12 23:46:02.922291 ntpd[1973]: basedate set to 2026-02-28
Mar 12 23:46:02.922324 ntpd[1973]: gps base set to 2026-03-01 (week 2408)
Mar 12 23:46:02.922520 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123
Mar 12 23:46:02.922568 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 12 23:46:03.003196 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 12 23:46:03.015158 update_engine[1981]: I20260312 23:46:02.999650 1981 update_check_scheduler.cc:74] Next update check in 4m44s
Mar 12 23:46:03.015255 extend-filesystems[2026]: resize2fs 1.47.3 (8-Jul-2025)
Mar 12 23:46:02.922913 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123
Mar 12 23:46:03.006767 systemd[1]: Started update-engine.service - Update Engine.
Mar 12 23:46:02.922963 ntpd[1973]: Listen normally on 3 eth0 172.31.24.143:123
Mar 12 23:46:02.923012 ntpd[1973]: Listen normally on 4 lo [::1]:123
Mar 12 23:46:02.923062 ntpd[1973]: bind(21) AF_INET6 [fe80::4b3:43ff:fe32:6951%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 12 23:46:02.923101 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::4b3:43ff:fe32:6951%2]:123
Mar 12 23:46:02.986045 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1821 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 12 23:46:03.024209 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 12 23:46:03.027621 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 12 23:46:03.051604 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 12 23:46:03.072332 systemd-logind[1980]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 12 23:46:03.072390 systemd-logind[1980]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 12 23:46:03.073068 systemd-logind[1980]: New seat seat0.
Mar 12 23:46:03.078451 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.185 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetch successful
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetch successful
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetch successful
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetch successful
Mar 12 23:46:03.187916 coreos-metadata[1967]: Mar 12 23:46:03.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 12 23:46:03.194026 coreos-metadata[1967]: Mar 12 23:46:03.190 INFO Fetch failed with 404: resource not found
Mar 12 23:46:03.194026 coreos-metadata[1967]: Mar 12 23:46:03.190 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.196 INFO Fetch successful
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.196 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.196 INFO Fetch successful
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.196 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.196 INFO Fetch successful
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.196 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.196 INFO Fetch successful
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.197 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 12 23:46:03.199874 coreos-metadata[1967]: Mar 12 23:46:03.198 INFO Fetch successful
Mar 12 23:46:03.303759 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 12 23:46:03.306565 bash[2051]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 23:46:03.316862 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 12 23:46:03.331362 systemd[1]: Starting sshkeys.service...
Mar 12 23:46:03.337801 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 12 23:46:03.341553 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 12 23:46:03.364856 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 12 23:46:03.364856 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 12 23:46:03.364856 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 12 23:46:03.362569 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 12 23:46:03.380400 extend-filesystems[1971]: Resized filesystem in /dev/nvme0n1p9
Mar 12 23:46:03.406170 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 12 23:46:03.473967 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 12 23:46:03.483496 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 12 23:46:03.660219 containerd[2010]: time="2026-03-12T23:46:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 12 23:46:03.664946 containerd[2010]: time="2026-03-12T23:46:03.662606387Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 12 23:46:03.853754 containerd[2010]: time="2026-03-12T23:46:03.852262320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.996µs"
Mar 12 23:46:03.853754 containerd[2010]: time="2026-03-12T23:46:03.852336192Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 12 23:46:03.853754 containerd[2010]: time="2026-03-12T23:46:03.852376308Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 12 23:46:03.853754 containerd[2010]: time="2026-03-12T23:46:03.853453092Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 12 23:46:03.853754 containerd[2010]: time="2026-03-12T23:46:03.853532208Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 12 23:46:03.853754 containerd[2010]: time="2026-03-12T23:46:03.853602444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 12 23:46:03.854133 containerd[2010]: time="2026-03-12T23:46:03.853837092Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 12 23:46:03.854133 containerd[2010]: time="2026-03-12T23:46:03.853878588Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 12 23:46:03.855760 containerd[2010]: time="2026-03-12T23:46:03.854316720Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 12 23:46:03.855760 containerd[2010]: time="2026-03-12T23:46:03.854379120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 12 23:46:03.855760 containerd[2010]: time="2026-03-12T23:46:03.854413104Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 12 23:46:03.855760 containerd[2010]: time="2026-03-12T23:46:03.854437680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 12 23:46:03.855760 containerd[2010]: time="2026-03-12T23:46:03.854654856Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 12 23:46:03.866209 containerd[2010]: time="2026-03-12T23:46:03.862357512Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 12 23:46:03.866209 containerd[2010]: time="2026-03-12T23:46:03.862482600Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 12 23:46:03.866209 containerd[2010]: time="2026-03-12T23:46:03.862519668Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 12 23:46:03.866209 containerd[2010]: time="2026-03-12T23:46:03.862583004Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 12 23:46:03.869734 containerd[2010]: time="2026-03-12T23:46:03.869627700Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 12 23:46:03.874368 containerd[2010]: time="2026-03-12T23:46:03.872809872Z" level=info msg="metadata content store policy set" policy=shared
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.890919252Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891054564Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891157752Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891194220Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891224460Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891258000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891287988Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891319176Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891346416Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891372024Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891396756Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891427080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 12 23:46:03.891749 containerd[2010]: time="2026-03-12T23:46:03.891674088Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892497888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892582956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892612860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892640820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892667724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892697040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892754568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892786620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892813512Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.892840320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.893226228Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 12 23:46:03.894758 containerd[2010]: time="2026-03-12T23:46:03.893268696Z" level=info msg="Start snapshots syncer"
Mar 12 23:46:03.899751 containerd[2010]: time="2026-03-12T23:46:03.898234656Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 12 23:46:03.905992 containerd[2010]: time="2026-03-12T23:46:03.902060508Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 12 23:46:03.905992 containerd[2010]: time="2026-03-12T23:46:03.903861876Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 12 23:46:03.906312 containerd[2010]: time="2026-03-12T23:46:03.905133468Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 12 23:46:03.906312 containerd[2010]: time="2026-03-12T23:46:03.905634000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.905701152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.909223884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.909262032Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.909293196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.909331260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.909363312Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.909420024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.909448356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 12 23:46:03.909563 containerd[2010]: time="2026-03-12T23:46:03.909476640Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 12 23:46:03.912137 containerd[2010]: time="2026-03-12T23:46:03.911513088Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 12 23:46:03.912137 containerd[2010]: time="2026-03-12T23:46:03.911582748Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 12 23:46:03.912137 containerd[2010]: time="2026-03-12T23:46:03.911609064Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 12 23:46:03.912137 containerd[2010]: time="2026-03-12T23:46:03.911644584Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 12 23:46:03.912137 containerd[2010]: time="2026-03-12T23:46:03.911670852Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 12 23:46:03.914912 containerd[2010]: time="2026-03-12T23:46:03.912513072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 12 23:46:03.914912 containerd[2010]: time="2026-03-12T23:46:03.912594780Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 12 23:46:03.915907 containerd[2010]: time="2026-03-12T23:46:03.915099324Z" level=info msg="runtime interface created"
Mar 12 23:46:03.915907 containerd[2010]: time="2026-03-12T23:46:03.915140256Z" level=info msg="created NRI interface"
Mar 12 23:46:03.915907 containerd[2010]: time="2026-03-12T23:46:03.915171396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 12 23:46:03.915907 containerd[2010]: time="2026-03-12T23:46:03.915207852Z" level=info msg="Connect containerd service"
Mar 12 23:46:03.915907 containerd[2010]: time="2026-03-12T23:46:03.915272904Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 12 23:46:03.937938 containerd[2010]: time="2026-03-12T23:46:03.932622336Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 23:46:04.011216 coreos-metadata[2095]: Mar 12 23:46:04.010 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 12 23:46:04.017852 coreos-metadata[2095]: Mar 12 23:46:04.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 12 23:46:04.022846 coreos-metadata[2095]: Mar 12 23:46:04.022 INFO Fetch successful
Mar 12 23:46:04.022846 coreos-metadata[2095]: Mar 12 23:46:04.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 12 23:46:04.023381 coreos-metadata[2095]: Mar 12 23:46:04.023 INFO Fetch successful
Mar 12 23:46:04.036273 unknown[2095]: wrote ssh authorized keys file for user: core
Mar 12 23:46:04.047989 systemd-coredump[2027]: Process 1973 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1973: #0 0x0000aaaadc030b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaadbfdfe60 n/a (ntpd + 0xfe60) #2 0x0000aaaadbfe0240 n/a (ntpd + 0x10240) #3 0x0000aaaadbfdbe14 n/a (ntpd + 0xbe14) #4 0x0000aaaadbfdd3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaadbfe5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaadbfd738c n/a (ntpd + 0x738c) #7 0x0000ffff8c722034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff8c722118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaadbfd73f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64
Mar 12 23:46:04.062739 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 12 23:46:04.065144 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 12 23:46:04.080306 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 12 23:46:04.091554 systemd[1]: systemd-coredump@0-2025-0.service: Deactivated successfully.
Mar 12 23:46:04.118910 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 12 23:46:04.129055 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 12 23:46:04.141663 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2029 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 12 23:46:04.155626 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 12 23:46:04.170263 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Mar 12 23:46:04.177786 systemd[1]: Started ntpd.service - Network Time Service.
Mar 12 23:46:04.212925 systemd-networkd[1821]: eth0: Gained IPv6LL
Mar 12 23:46:04.223491 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 12 23:46:04.231026 systemd[1]: Reached target network-online.target - Network is Online.
Mar 12 23:46:04.242999 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 12 23:46:04.264252 update-ssh-keys[2170]: Updated "/home/core/.ssh/authorized_keys" Mar 12 23:46:04.252264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:04.259972 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 23:46:04.267494 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 12 23:46:04.288007 containerd[2010]: time="2026-03-12T23:46:04.285267034Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 23:46:04.288007 containerd[2010]: time="2026-03-12T23:46:04.285392554Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 23:46:04.288007 containerd[2010]: time="2026-03-12T23:46:04.285443014Z" level=info msg="Start subscribing containerd event" Mar 12 23:46:04.288007 containerd[2010]: time="2026-03-12T23:46:04.285516958Z" level=info msg="Start recovering state" Mar 12 23:46:04.288007 containerd[2010]: time="2026-03-12T23:46:04.285653530Z" level=info msg="Start event monitor" Mar 12 23:46:04.288007 containerd[2010]: time="2026-03-12T23:46:04.285680962Z" level=info msg="Start cni network conf syncer for default" Mar 12 23:46:04.288665 systemd[1]: Finished sshkeys.service. Mar 12 23:46:04.301740 containerd[2010]: time="2026-03-12T23:46:04.285699466Z" level=info msg="Start streaming server" Mar 12 23:46:04.322182 containerd[2010]: time="2026-03-12T23:46:04.321398314Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 12 23:46:04.335404 containerd[2010]: time="2026-03-12T23:46:04.331740538Z" level=info msg="runtime interface starting up..." Mar 12 23:46:04.335404 containerd[2010]: time="2026-03-12T23:46:04.331799386Z" level=info msg="starting plugins..." 
Mar 12 23:46:04.335404 containerd[2010]: time="2026-03-12T23:46:04.331850074Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 12 23:46:04.335404 containerd[2010]: time="2026-03-12T23:46:04.332161918Z" level=info msg="containerd successfully booted in 0.675531s" Mar 12 23:46:04.332303 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 23:46:04.339734 ntpd[2185]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:41 UTC 2026 (1): Starting Mar 12 23:46:04.344341 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:41 UTC 2026 (1): Starting Mar 12 23:46:04.344341 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 12 23:46:04.344341 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: ---------------------------------------------------- Mar 12 23:46:04.344341 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: ntp-4 is maintained by Network Time Foundation, Mar 12 23:46:04.344341 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 12 23:46:04.344341 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: corporation. 
Support and training for ntp-4 are Mar 12 23:46:04.344341 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: available at https://www.nwtime.org/support Mar 12 23:46:04.344341 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: ---------------------------------------------------- Mar 12 23:46:04.339846 ntpd[2185]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 12 23:46:04.339865 ntpd[2185]: ---------------------------------------------------- Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: proto: precision = 0.096 usec (-23) Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: basedate set to 2026-02-28 Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: gps base set to 2026-03-01 (week 2408) Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Listen and drop on 0 v6wildcard [::]:123 Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Listen normally on 2 lo 127.0.0.1:123 Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Listen normally on 3 eth0 172.31.24.143:123 Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Listen normally on 4 lo [::1]:123 Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Listen normally on 5 eth0 [fe80::4b3:43ff:fe32:6951%2]:123 Mar 12 23:46:04.353145 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: Listening on routing socket on fd #22 for interface updates Mar 12 23:46:04.339882 ntpd[2185]: ntp-4 is maintained by Network Time Foundation, Mar 12 23:46:04.339898 ntpd[2185]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 12 23:46:04.339915 ntpd[2185]: corporation. 
Support and training for ntp-4 are Mar 12 23:46:04.339930 ntpd[2185]: available at https://www.nwtime.org/support Mar 12 23:46:04.339946 ntpd[2185]: ---------------------------------------------------- Mar 12 23:46:04.351158 ntpd[2185]: proto: precision = 0.096 usec (-23) Mar 12 23:46:04.351474 ntpd[2185]: basedate set to 2026-02-28 Mar 12 23:46:04.351496 ntpd[2185]: gps base set to 2026-03-01 (week 2408) Mar 12 23:46:04.351623 ntpd[2185]: Listen and drop on 0 v6wildcard [::]:123 Mar 12 23:46:04.351665 ntpd[2185]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 12 23:46:04.351970 ntpd[2185]: Listen normally on 2 lo 127.0.0.1:123 Mar 12 23:46:04.352016 ntpd[2185]: Listen normally on 3 eth0 172.31.24.143:123 Mar 12 23:46:04.352066 ntpd[2185]: Listen normally on 4 lo [::1]:123 Mar 12 23:46:04.352115 ntpd[2185]: Listen normally on 5 eth0 [fe80::4b3:43ff:fe32:6951%2]:123 Mar 12 23:46:04.352173 ntpd[2185]: Listening on routing socket on fd #22 for interface updates Mar 12 23:46:04.405367 ntpd[2185]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 12 23:46:04.405794 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 12 23:46:04.405794 ntpd[2185]: 12 Mar 23:46:04 ntpd[2185]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 12 23:46:04.405425 ntpd[2185]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 12 23:46:04.428101 locksmithd[2031]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: Initializing new seelog logger Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: New Seelog Logger Creation Complete Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: 2026/03/12 23:46:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: 2026/03/12 23:46:04 processing appconfig overrides Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: 2026/03/12 23:46:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: 2026/03/12 23:46:04 processing appconfig overrides Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: 2026/03/12 23:46:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:04.479758 amazon-ssm-agent[2188]: 2026/03/12 23:46:04 processing appconfig overrides Mar 12 23:46:04.483148 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.4785 INFO Proxy environment variables: Mar 12 23:46:04.488362 amazon-ssm-agent[2188]: 2026/03/12 23:46:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:04.488362 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:04.488362 amazon-ssm-agent[2188]: 2026/03/12 23:46:04 processing appconfig overrides Mar 12 23:46:04.494312 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 12 23:46:04.582760 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.4785 INFO https_proxy: Mar 12 23:46:04.685789 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.4785 INFO http_proxy: Mar 12 23:46:04.735106 polkitd[2183]: Started polkitd version 126 Mar 12 23:46:04.769639 polkitd[2183]: Loading rules from directory /etc/polkit-1/rules.d Mar 12 23:46:04.773444 polkitd[2183]: Loading rules from directory /run/polkit-1/rules.d Mar 12 23:46:04.773575 polkitd[2183]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 12 23:46:04.780287 polkitd[2183]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 12 23:46:04.780392 polkitd[2183]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 12 23:46:04.780480 polkitd[2183]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 12 23:46:04.783358 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.4785 INFO no_proxy: Mar 12 23:46:04.788846 polkitd[2183]: Finished loading, compiling and executing 2 rules Mar 12 23:46:04.790175 systemd[1]: Started polkit.service - Authorization Manager. Mar 12 23:46:04.797047 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 12 23:46:04.799913 polkitd[2183]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 12 23:46:04.854310 systemd-hostnamed[2029]: Hostname set to (transient) Mar 12 23:46:04.855350 systemd-resolved[1822]: System hostname changed to 'ip-172-31-24-143'. 
Mar 12 23:46:04.881892 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.4787 INFO Checking if agent identity type OnPrem can be assumed Mar 12 23:46:04.982618 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.4788 INFO Checking if agent identity type EC2 can be assumed Mar 12 23:46:05.082733 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6135 INFO Agent will take identity from EC2 Mar 12 23:46:05.181197 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6159 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 12 23:46:05.222585 amazon-ssm-agent[2188]: 2026/03/12 23:46:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:05.222585 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:46:05.224198 amazon-ssm-agent[2188]: 2026/03/12 23:46:05 processing appconfig overrides Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6159 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6159 INFO [amazon-ssm-agent] Starting Core Agent Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6159 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6159 INFO [Registrar] Starting registrar module Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6252 INFO [EC2Identity] Checking disk for registration info Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6252 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:04.6252 INFO [EC2Identity] Generating registration keypair Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.1705 INFO [EC2Identity] Checking write access before registering Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.1733 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.2222 INFO [EC2Identity] EC2 registration was successful. Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.2223 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.2224 INFO [CredentialRefresher] credentialRefresher has started Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.2224 INFO [CredentialRefresher] Starting credentials refresher loop Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.2593 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 12 23:46:05.260091 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.2596 INFO [CredentialRefresher] Credentials ready Mar 12 23:46:05.283743 amazon-ssm-agent[2188]: 2026-03-12 23:46:05.2599 INFO [CredentialRefresher] Next credential rotation will be in 29.9999897223 minutes Mar 12 23:46:05.326371 tar[1994]: linux-arm64/README.md Mar 12 23:46:05.358494 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 12 23:46:06.310093 amazon-ssm-agent[2188]: 2026-03-12 23:46:06.2986 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 12 23:46:06.411072 amazon-ssm-agent[2188]: 2026-03-12 23:46:06.3119 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2230) started Mar 12 23:46:06.511199 amazon-ssm-agent[2188]: 2026-03-12 23:46:06.3120 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 12 23:46:06.840658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:06.861226 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:46:07.906063 kubelet[2246]: E0312 23:46:07.905994 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:46:07.910823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:46:07.911502 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:46:07.912167 systemd[1]: kubelet.service: Consumed 1.298s CPU time, 250.3M memory peak. Mar 12 23:46:08.132043 sshd_keygen[2001]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 23:46:08.170360 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 23:46:08.177032 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 23:46:08.187140 systemd[1]: Started sshd@0-172.31.24.143:22-4.153.228.146:52540.service - OpenSSH per-connection server daemon (4.153.228.146:52540). 
Mar 12 23:46:08.206346 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 23:46:08.207072 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 23:46:08.216197 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 23:46:08.253874 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 23:46:08.262959 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 23:46:08.269244 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 23:46:08.340583 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 23:46:08.343004 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 23:46:08.346856 systemd[1]: Startup finished in 3.907s (kernel) + 8.215s (initrd) + 11.498s (userspace) = 23.622s. Mar 12 23:46:08.776110 sshd[2262]: Accepted publickey for core from 4.153.228.146 port 52540 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:46:08.779764 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:46:08.801698 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 23:46:08.804593 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 23:46:08.810828 systemd-logind[1980]: New session 1 of user core. Mar 12 23:46:08.840783 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 23:46:08.846023 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 23:46:08.867053 (systemd)[2278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 23:46:08.871556 systemd-logind[1980]: New session c1 of user core. Mar 12 23:46:09.168732 systemd[2278]: Queued start job for default target default.target. Mar 12 23:46:09.177031 systemd[2278]: Created slice app.slice - User Application Slice. 
Mar 12 23:46:09.177108 systemd[2278]: Reached target paths.target - Paths. Mar 12 23:46:09.177203 systemd[2278]: Reached target timers.target - Timers. Mar 12 23:46:09.179887 systemd[2278]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 23:46:09.224421 systemd[2278]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 23:46:09.224921 systemd[2278]: Reached target sockets.target - Sockets. Mar 12 23:46:09.225026 systemd[2278]: Reached target basic.target - Basic System. Mar 12 23:46:09.225112 systemd[2278]: Reached target default.target - Main User Target. Mar 12 23:46:09.225172 systemd[2278]: Startup finished in 341ms. Mar 12 23:46:09.225206 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 23:46:09.237037 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 23:46:09.492613 systemd[1]: Started sshd@1-172.31.24.143:22-4.153.228.146:35066.service - OpenSSH per-connection server daemon (4.153.228.146:35066). Mar 12 23:46:09.962916 sshd[2289]: Accepted publickey for core from 4.153.228.146 port 35066 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:46:09.965498 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:46:09.974825 systemd-logind[1980]: New session 2 of user core. Mar 12 23:46:09.995072 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 23:46:10.203814 sshd[2292]: Connection closed by 4.153.228.146 port 35066 Mar 12 23:46:10.205002 sshd-session[2289]: pam_unix(sshd:session): session closed for user core Mar 12 23:46:10.212489 systemd-logind[1980]: Session 2 logged out. Waiting for processes to exit. Mar 12 23:46:10.212782 systemd[1]: sshd@1-172.31.24.143:22-4.153.228.146:35066.service: Deactivated successfully. Mar 12 23:46:10.217224 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 23:46:10.221477 systemd-logind[1980]: Removed session 2. 
Mar 12 23:46:10.296517 systemd[1]: Started sshd@2-172.31.24.143:22-4.153.228.146:35082.service - OpenSSH per-connection server daemon (4.153.228.146:35082). Mar 12 23:46:10.752356 sshd[2298]: Accepted publickey for core from 4.153.228.146 port 35082 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:46:10.754527 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:46:10.763821 systemd-logind[1980]: New session 3 of user core. Mar 12 23:46:10.772034 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 23:46:10.982771 sshd[2301]: Connection closed by 4.153.228.146 port 35082 Mar 12 23:46:10.982637 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Mar 12 23:46:10.992139 systemd-logind[1980]: Session 3 logged out. Waiting for processes to exit. Mar 12 23:46:10.993506 systemd[1]: sshd@2-172.31.24.143:22-4.153.228.146:35082.service: Deactivated successfully. Mar 12 23:46:10.998635 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 23:46:11.004170 systemd-logind[1980]: Removed session 3. Mar 12 23:46:11.080766 systemd[1]: Started sshd@3-172.31.24.143:22-4.153.228.146:35084.service - OpenSSH per-connection server daemon (4.153.228.146:35084). Mar 12 23:46:11.787851 systemd-resolved[1822]: Clock change detected. Flushing caches. Mar 12 23:46:11.992166 sshd[2307]: Accepted publickey for core from 4.153.228.146 port 35084 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:46:11.994521 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:46:12.002111 systemd-logind[1980]: New session 4 of user core. Mar 12 23:46:12.023248 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 12 23:46:12.234636 sshd[2310]: Connection closed by 4.153.228.146 port 35084 Mar 12 23:46:12.235412 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Mar 12 23:46:12.243819 systemd[1]: sshd@3-172.31.24.143:22-4.153.228.146:35084.service: Deactivated successfully. Mar 12 23:46:12.247668 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 23:46:12.250352 systemd-logind[1980]: Session 4 logged out. Waiting for processes to exit. Mar 12 23:46:12.253583 systemd-logind[1980]: Removed session 4. Mar 12 23:46:12.331701 systemd[1]: Started sshd@4-172.31.24.143:22-4.153.228.146:35098.service - OpenSSH per-connection server daemon (4.153.228.146:35098). Mar 12 23:46:12.796665 sshd[2316]: Accepted publickey for core from 4.153.228.146 port 35098 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:46:12.798834 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:46:12.806372 systemd-logind[1980]: New session 5 of user core. Mar 12 23:46:12.812244 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 23:46:12.980593 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 23:46:12.981861 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 23:46:13.000189 sudo[2320]: pam_unix(sudo:session): session closed for user root Mar 12 23:46:13.080104 sshd[2319]: Connection closed by 4.153.228.146 port 35098 Mar 12 23:46:13.081386 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Mar 12 23:46:13.090382 systemd[1]: sshd@4-172.31.24.143:22-4.153.228.146:35098.service: Deactivated successfully. Mar 12 23:46:13.095722 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 23:46:13.098705 systemd-logind[1980]: Session 5 logged out. Waiting for processes to exit. Mar 12 23:46:13.102275 systemd-logind[1980]: Removed session 5. 
Mar 12 23:46:13.178242 systemd[1]: Started sshd@5-172.31.24.143:22-4.153.228.146:35102.service - OpenSSH per-connection server daemon (4.153.228.146:35102). Mar 12 23:46:13.638417 sshd[2326]: Accepted publickey for core from 4.153.228.146 port 35102 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:46:13.640799 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:46:13.648496 systemd-logind[1980]: New session 6 of user core. Mar 12 23:46:13.656236 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 23:46:13.804839 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 23:46:13.805935 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 23:46:13.815282 sudo[2331]: pam_unix(sudo:session): session closed for user root Mar 12 23:46:13.825104 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 12 23:46:13.825717 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 23:46:13.844017 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 12 23:46:13.900965 augenrules[2353]: No rules Mar 12 23:46:13.903419 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 23:46:13.903887 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 12 23:46:13.906239 sudo[2330]: pam_unix(sudo:session): session closed for user root Mar 12 23:46:13.985213 sshd[2329]: Connection closed by 4.153.228.146 port 35102 Mar 12 23:46:13.985958 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Mar 12 23:46:13.993845 systemd[1]: sshd@5-172.31.24.143:22-4.153.228.146:35102.service: Deactivated successfully. Mar 12 23:46:13.997211 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 12 23:46:14.000289 systemd-logind[1980]: Session 6 logged out. Waiting for processes to exit. Mar 12 23:46:14.002835 systemd-logind[1980]: Removed session 6. Mar 12 23:46:14.082747 systemd[1]: Started sshd@6-172.31.24.143:22-4.153.228.146:35104.service - OpenSSH per-connection server daemon (4.153.228.146:35104). Mar 12 23:46:14.544758 sshd[2362]: Accepted publickey for core from 4.153.228.146 port 35104 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:46:14.546938 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:46:14.554602 systemd-logind[1980]: New session 7 of user core. Mar 12 23:46:14.562261 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 23:46:14.710268 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 23:46:14.710890 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 23:46:15.257034 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 23:46:15.285528 (dockerd)[2383]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 23:46:15.673494 dockerd[2383]: time="2026-03-12T23:46:15.673416043Z" level=info msg="Starting up" Mar 12 23:46:15.677071 dockerd[2383]: time="2026-03-12T23:46:15.676760539Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 12 23:46:15.697376 dockerd[2383]: time="2026-03-12T23:46:15.697313383Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 12 23:46:15.757837 systemd[1]: var-lib-docker-metacopy\x2dcheck2433050257-merged.mount: Deactivated successfully. Mar 12 23:46:15.782237 dockerd[2383]: time="2026-03-12T23:46:15.781932187Z" level=info msg="Loading containers: start." 
Mar 12 23:46:15.799284 kernel: Initializing XFRM netlink socket Mar 12 23:46:16.180723 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. Mar 12 23:46:16.265328 systemd-networkd[1821]: docker0: Link UP Mar 12 23:46:16.281227 dockerd[2383]: time="2026-03-12T23:46:16.280541382Z" level=info msg="Loading containers: done." Mar 12 23:46:16.315453 dockerd[2383]: time="2026-03-12T23:46:16.315383286Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 23:46:16.315845 dockerd[2383]: time="2026-03-12T23:46:16.315776346Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 12 23:46:16.316195 dockerd[2383]: time="2026-03-12T23:46:16.316085574Z" level=info msg="Initializing buildkit" Mar 12 23:46:16.378685 dockerd[2383]: time="2026-03-12T23:46:16.378247350Z" level=info msg="Completed buildkit initialization" Mar 12 23:46:16.393663 dockerd[2383]: time="2026-03-12T23:46:16.393607242Z" level=info msg="Daemon has completed initialization" Mar 12 23:46:16.394111 dockerd[2383]: time="2026-03-12T23:46:16.393888870Z" level=info msg="API listen on /run/docker.sock" Mar 12 23:46:16.395136 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 23:46:16.730695 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3160053664-merged.mount: Deactivated successfully. Mar 12 23:46:17.291829 containerd[2010]: time="2026-03-12T23:46:17.291460687Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 12 23:46:18.028429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109188139.mount: Deactivated successfully. Mar 12 23:46:18.524079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 12 23:46:18.528256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:18.970138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:18.983638 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:46:19.097894 kubelet[2659]: E0312 23:46:19.097805 2659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:46:19.108142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:46:19.108959 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:46:19.110132 systemd[1]: kubelet.service: Consumed 377ms CPU time, 106.8M memory peak. 
Mar 12 23:46:19.703040 containerd[2010]: time="2026-03-12T23:46:19.702954803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:19.706704 containerd[2010]: time="2026-03-12T23:46:19.706643783Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252" Mar 12 23:46:19.709217 containerd[2010]: time="2026-03-12T23:46:19.709126127Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:19.717061 containerd[2010]: time="2026-03-12T23:46:19.716910827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:19.724032 containerd[2010]: time="2026-03-12T23:46:19.723516179Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.43194772s" Mar 12 23:46:19.724032 containerd[2010]: time="2026-03-12T23:46:19.723611255Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\"" Mar 12 23:46:19.725843 containerd[2010]: time="2026-03-12T23:46:19.725789267Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 12 23:46:21.452864 containerd[2010]: time="2026-03-12T23:46:21.452128836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:21.455687 containerd[2010]: time="2026-03-12T23:46:21.455587260Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641" Mar 12 23:46:21.458034 containerd[2010]: time="2026-03-12T23:46:21.457380816Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:21.464335 containerd[2010]: time="2026-03-12T23:46:21.464270328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:21.465833 containerd[2010]: time="2026-03-12T23:46:21.465742536Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.739680581s" Mar 12 23:46:21.465833 containerd[2010]: time="2026-03-12T23:46:21.465819072Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\"" Mar 12 23:46:21.466601 containerd[2010]: time="2026-03-12T23:46:21.466537320Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 12 23:46:22.739030 containerd[2010]: time="2026-03-12T23:46:22.738927410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:22.742729 containerd[2010]: time="2026-03-12T23:46:22.742644206Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544" Mar 12 23:46:22.746153 containerd[2010]: time="2026-03-12T23:46:22.746053886Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:22.754278 containerd[2010]: time="2026-03-12T23:46:22.754152422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:22.756889 containerd[2010]: time="2026-03-12T23:46:22.756239498Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.289636178s" Mar 12 23:46:22.756889 containerd[2010]: time="2026-03-12T23:46:22.756317570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\"" Mar 12 23:46:22.757344 containerd[2010]: time="2026-03-12T23:46:22.757288154Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 12 23:46:23.989717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525348791.mount: Deactivated successfully. 
Mar 12 23:46:24.389468 containerd[2010]: time="2026-03-12T23:46:24.388652606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:24.390428 containerd[2010]: time="2026-03-12T23:46:24.390386270Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088" Mar 12 23:46:24.390749 containerd[2010]: time="2026-03-12T23:46:24.390713522Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:24.393515 containerd[2010]: time="2026-03-12T23:46:24.393464606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:24.394753 containerd[2010]: time="2026-03-12T23:46:24.394710866Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.637361404s" Mar 12 23:46:24.394904 containerd[2010]: time="2026-03-12T23:46:24.394877102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\"" Mar 12 23:46:24.395924 containerd[2010]: time="2026-03-12T23:46:24.395885234Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 12 23:46:24.944931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229278846.mount: Deactivated successfully. 
Mar 12 23:46:26.133681 containerd[2010]: time="2026-03-12T23:46:26.133587759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:26.137102 containerd[2010]: time="2026-03-12T23:46:26.137049855Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Mar 12 23:46:26.139669 containerd[2010]: time="2026-03-12T23:46:26.139595571Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:26.145336 containerd[2010]: time="2026-03-12T23:46:26.145253391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:26.147187 containerd[2010]: time="2026-03-12T23:46:26.147136623Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.750941369s" Mar 12 23:46:26.147360 containerd[2010]: time="2026-03-12T23:46:26.147329127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Mar 12 23:46:26.148817 containerd[2010]: time="2026-03-12T23:46:26.148755327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 12 23:46:26.649737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3543796773.mount: Deactivated successfully. 
Mar 12 23:46:26.662824 containerd[2010]: time="2026-03-12T23:46:26.662743193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:26.666515 containerd[2010]: time="2026-03-12T23:46:26.666459809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 12 23:46:26.668592 containerd[2010]: time="2026-03-12T23:46:26.668526341Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:26.674454 containerd[2010]: time="2026-03-12T23:46:26.674376413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:26.677298 containerd[2010]: time="2026-03-12T23:46:26.677255717Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 528.44345ms" Mar 12 23:46:26.677455 containerd[2010]: time="2026-03-12T23:46:26.677428073Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 12 23:46:26.678313 containerd[2010]: time="2026-03-12T23:46:26.678279377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 12 23:46:27.252109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125817230.mount: Deactivated successfully. 
Mar 12 23:46:28.564823 containerd[2010]: time="2026-03-12T23:46:28.564726607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:28.568605 containerd[2010]: time="2026-03-12T23:46:28.568207939Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515" Mar 12 23:46:28.570641 containerd[2010]: time="2026-03-12T23:46:28.570582523Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:28.576380 containerd[2010]: time="2026-03-12T23:46:28.576306283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:28.579304 containerd[2010]: time="2026-03-12T23:46:28.579233923Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.900735906s" Mar 12 23:46:28.579496 containerd[2010]: time="2026-03-12T23:46:28.579467047Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Mar 12 23:46:29.274103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 23:46:29.279178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:29.662298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 23:46:29.678114 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:46:29.749187 kubelet[2830]: E0312 23:46:29.749108 2830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:46:29.753340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:46:29.753785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:46:29.754927 systemd[1]: kubelet.service: Consumed 327ms CPU time, 105.6M memory peak. Mar 12 23:46:35.338777 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 12 23:46:35.574825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:35.575807 systemd[1]: kubelet.service: Consumed 327ms CPU time, 105.6M memory peak. Mar 12 23:46:35.581133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:35.635179 systemd[1]: Reload requested from client PID 2847 ('systemctl') (unit session-7.scope)... Mar 12 23:46:35.635214 systemd[1]: Reloading... Mar 12 23:46:35.891047 zram_generator::config[2897]: No configuration found. Mar 12 23:46:36.342641 systemd[1]: Reloading finished in 706 ms. Mar 12 23:46:36.419552 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 23:46:36.419910 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 23:46:36.420716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:36.420938 systemd[1]: kubelet.service: Consumed 232ms CPU time, 94.9M memory peak. 
Mar 12 23:46:36.425561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:36.766524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:36.783595 (kubelet)[2955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 23:46:36.858487 kubelet[2955]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 23:46:36.858487 kubelet[2955]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 23:46:36.858977 kubelet[2955]: I0312 23:46:36.858565 2955 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 23:46:38.290740 kubelet[2955]: I0312 23:46:38.290681 2955 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 23:46:38.292056 kubelet[2955]: I0312 23:46:38.291356 2955 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 23:46:38.292056 kubelet[2955]: I0312 23:46:38.291427 2955 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 23:46:38.292056 kubelet[2955]: I0312 23:46:38.291445 2955 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 23:46:38.292056 kubelet[2955]: I0312 23:46:38.291858 2955 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 23:46:38.304584 kubelet[2955]: E0312 23:46:38.304528 2955 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 23:46:38.306836 kubelet[2955]: I0312 23:46:38.306755 2955 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 23:46:38.316068 kubelet[2955]: I0312 23:46:38.316032 2955 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 23:46:38.323052 kubelet[2955]: I0312 23:46:38.322891 2955 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 23:46:38.324034 kubelet[2955]: I0312 23:46:38.323653 2955 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 23:46:38.324425 kubelet[2955]: I0312 23:46:38.323718 2955 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 23:46:38.324742 kubelet[2955]: I0312 23:46:38.324704 2955 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 
23:46:38.324947 kubelet[2955]: I0312 23:46:38.324919 2955 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 23:46:38.325312 kubelet[2955]: I0312 23:46:38.325269 2955 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 23:46:38.330842 kubelet[2955]: I0312 23:46:38.330782 2955 state_mem.go:36] "Initialized new in-memory state store" Mar 12 23:46:38.333501 kubelet[2955]: I0312 23:46:38.333449 2955 kubelet.go:475] "Attempting to sync node with API server" Mar 12 23:46:38.333501 kubelet[2955]: I0312 23:46:38.333504 2955 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 23:46:38.333715 kubelet[2955]: I0312 23:46:38.333559 2955 kubelet.go:387] "Adding apiserver pod source" Mar 12 23:46:38.333715 kubelet[2955]: I0312 23:46:38.333582 2955 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 23:46:38.336566 kubelet[2955]: E0312 23:46:38.336499 2955 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 23:46:38.337531 kubelet[2955]: E0312 23:46:38.337443 2955 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-143&limit=500&resourceVersion=0\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 23:46:38.337695 kubelet[2955]: I0312 23:46:38.337656 2955 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 12 23:46:38.338758 kubelet[2955]: I0312 23:46:38.338680 2955 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 23:46:38.338758 kubelet[2955]: I0312 23:46:38.338762 2955 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 23:46:38.339029 kubelet[2955]: W0312 23:46:38.338847 2955 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 12 23:46:38.344186 kubelet[2955]: I0312 23:46:38.344132 2955 server.go:1262] "Started kubelet" Mar 12 23:46:38.349721 kubelet[2955]: I0312 23:46:38.349642 2955 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 23:46:38.350628 kubelet[2955]: I0312 23:46:38.350513 2955 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 23:46:38.350628 kubelet[2955]: I0312 23:46:38.350633 2955 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 23:46:38.351330 kubelet[2955]: I0312 23:46:38.351264 2955 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 23:46:38.352446 kubelet[2955]: I0312 23:46:38.352397 2955 server.go:310] "Adding debug handlers to kubelet server" Mar 12 23:46:38.357646 kubelet[2955]: I0312 23:46:38.357320 2955 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 23:46:38.359293 kubelet[2955]: E0312 23:46:38.356841 2955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.143:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.143:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-143.189c3ccb94782b1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-143,UID:ip-172-31-24-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-143,},FirstTimestamp:2026-03-12 23:46:38.344063775 +0000 UTC m=+1.553842700,LastTimestamp:2026-03-12 23:46:38.344063775 +0000 UTC m=+1.553842700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-143,}" Mar 12 23:46:38.365400 kubelet[2955]: I0312 23:46:38.362263 2955 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 23:46:38.370179 kubelet[2955]: E0312 23:46:38.370141 2955 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-143\" not found" Mar 12 23:46:38.370875 kubelet[2955]: I0312 23:46:38.370842 2955 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 23:46:38.371475 kubelet[2955]: I0312 23:46:38.371429 2955 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 23:46:38.371712 kubelet[2955]: I0312 23:46:38.371687 2955 reconciler.go:29] "Reconciler: start to sync state" Mar 12 23:46:38.373177 kubelet[2955]: E0312 23:46:38.373127 2955 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 23:46:38.373966 kubelet[2955]: I0312 23:46:38.373916 2955 factory.go:223] Registration of the systemd container factory successfully Mar 12 23:46:38.374470 kubelet[2955]: I0312 23:46:38.374425 2955 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 23:46:38.377266 kubelet[2955]: E0312 23:46:38.377207 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-143?timeout=10s\": dial tcp 172.31.24.143:6443: connect: connection refused" interval="200ms" Mar 12 23:46:38.377633 kubelet[2955]: E0312 23:46:38.377595 2955 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 23:46:38.378052 kubelet[2955]: I0312 23:46:38.377982 2955 factory.go:223] Registration of the containerd container factory successfully Mar 12 23:46:38.410665 kubelet[2955]: I0312 23:46:38.410611 2955 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 23:46:38.418685 kubelet[2955]: I0312 23:46:38.418614 2955 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 12 23:46:38.418685 kubelet[2955]: I0312 23:46:38.418672 2955 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 23:46:38.418890 kubelet[2955]: I0312 23:46:38.418711 2955 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 23:46:38.418890 kubelet[2955]: E0312 23:46:38.418794 2955 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 23:46:38.425090 kubelet[2955]: E0312 23:46:38.424967 2955 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 23:46:38.434691 kubelet[2955]: I0312 23:46:38.434649 2955 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 23:46:38.434890 kubelet[2955]: I0312 23:46:38.434866 2955 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 23:46:38.435050 kubelet[2955]: I0312 23:46:38.435028 2955 state_mem.go:36] "Initialized new in-memory state store" Mar 12 23:46:38.441085 kubelet[2955]: I0312 23:46:38.441046 2955 policy_none.go:49] "None policy: Start" Mar 12 23:46:38.442600 kubelet[2955]: I0312 23:46:38.442567 2955 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 23:46:38.442796 kubelet[2955]: I0312 23:46:38.442770 2955 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 23:46:38.446448 kubelet[2955]: I0312 23:46:38.446408 2955 policy_none.go:47] "Start" Mar 12 23:46:38.457392 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 12 23:46:38.471820 kubelet[2955]: E0312 23:46:38.471746 2955 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-143\" not found" Mar 12 23:46:38.475810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 12 23:46:38.495131 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 23:46:38.500604 kubelet[2955]: E0312 23:46:38.500547 2955 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 23:46:38.501017 kubelet[2955]: I0312 23:46:38.500902 2955 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 23:46:38.501017 kubelet[2955]: I0312 23:46:38.500945 2955 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 23:46:38.502153 kubelet[2955]: I0312 23:46:38.501536 2955 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 23:46:38.504814 kubelet[2955]: E0312 23:46:38.504747 2955 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 23:46:38.505630 kubelet[2955]: E0312 23:46:38.505553 2955 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-143\" not found" Mar 12 23:46:38.549645 systemd[1]: Created slice kubepods-burstable-pod3d37019ecfab11c2d2ee4ac88dd04f14.slice - libcontainer container kubepods-burstable-pod3d37019ecfab11c2d2ee4ac88dd04f14.slice. 
Mar 12 23:46:38.562924 kubelet[2955]: E0312 23:46:38.562717 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143" Mar 12 23:46:38.571834 systemd[1]: Created slice kubepods-burstable-pod0f2c2c191bdb8ef427921e962b2cedef.slice - libcontainer container kubepods-burstable-pod0f2c2c191bdb8ef427921e962b2cedef.slice. Mar 12 23:46:38.578463 kubelet[2955]: E0312 23:46:38.578390 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-143?timeout=10s\": dial tcp 172.31.24.143:6443: connect: connection refused" interval="400ms" Mar 12 23:46:38.582107 kubelet[2955]: E0312 23:46:38.581502 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143" Mar 12 23:46:38.587762 systemd[1]: Created slice kubepods-burstable-pod4de86953fa8c46568e1ebaf6198986fb.slice - libcontainer container kubepods-burstable-pod4de86953fa8c46568e1ebaf6198986fb.slice. 
Mar 12 23:46:38.592281 kubelet[2955]: E0312 23:46:38.592239 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:38.605111 kubelet[2955]: I0312 23:46:38.604667 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-143"
Mar 12 23:46:38.605661 kubelet[2955]: E0312 23:46:38.605590 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.143:6443/api/v1/nodes\": dial tcp 172.31.24.143:6443: connect: connection refused" node="ip-172-31-24-143"
Mar 12 23:46:38.675288 kubelet[2955]: I0312 23:46:38.675181 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4de86953fa8c46568e1ebaf6198986fb-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-143\" (UID: \"4de86953fa8c46568e1ebaf6198986fb\") " pod="kube-system/kube-apiserver-ip-172-31-24-143"
Mar 12 23:46:38.675288 kubelet[2955]: I0312 23:46:38.675255 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:38.675639 kubelet[2955]: I0312 23:46:38.675309 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:38.675639 kubelet[2955]: I0312 23:46:38.675396 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:38.675639 kubelet[2955]: I0312 23:46:38.675479 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:38.675639 kubelet[2955]: I0312 23:46:38.675522 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f2c2c191bdb8ef427921e962b2cedef-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-143\" (UID: \"0f2c2c191bdb8ef427921e962b2cedef\") " pod="kube-system/kube-scheduler-ip-172-31-24-143"
Mar 12 23:46:38.675639 kubelet[2955]: I0312 23:46:38.675560 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4de86953fa8c46568e1ebaf6198986fb-ca-certs\") pod \"kube-apiserver-ip-172-31-24-143\" (UID: \"4de86953fa8c46568e1ebaf6198986fb\") " pod="kube-system/kube-apiserver-ip-172-31-24-143"
Mar 12 23:46:38.675907 kubelet[2955]: I0312 23:46:38.675597 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4de86953fa8c46568e1ebaf6198986fb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-143\" (UID: \"4de86953fa8c46568e1ebaf6198986fb\") " pod="kube-system/kube-apiserver-ip-172-31-24-143"
Mar 12 23:46:38.675907 kubelet[2955]: I0312 23:46:38.675649 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:38.808835 kubelet[2955]: I0312 23:46:38.808668 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-143"
Mar 12 23:46:38.810335 kubelet[2955]: E0312 23:46:38.810267 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.143:6443/api/v1/nodes\": dial tcp 172.31.24.143:6443: connect: connection refused" node="ip-172-31-24-143"
Mar 12 23:46:38.870752 containerd[2010]: time="2026-03-12T23:46:38.870657810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-143,Uid:3d37019ecfab11c2d2ee4ac88dd04f14,Namespace:kube-system,Attempt:0,}"
Mar 12 23:46:38.887910 containerd[2010]: time="2026-03-12T23:46:38.887786442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-143,Uid:0f2c2c191bdb8ef427921e962b2cedef,Namespace:kube-system,Attempt:0,}"
Mar 12 23:46:38.899845 containerd[2010]: time="2026-03-12T23:46:38.899762706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-143,Uid:4de86953fa8c46568e1ebaf6198986fb,Namespace:kube-system,Attempt:0,}"
Mar 12 23:46:38.979248 kubelet[2955]: E0312 23:46:38.979174 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-143?timeout=10s\": dial tcp 172.31.24.143:6443: connect: connection refused" interval="800ms"
Mar 12 23:46:39.214086 kubelet[2955]: I0312 23:46:39.213825 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-143"
Mar 12 23:46:39.215566 kubelet[2955]: E0312 23:46:39.215496 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.143:6443/api/v1/nodes\": dial tcp 172.31.24.143:6443: connect: connection refused" node="ip-172-31-24-143"
Mar 12 23:46:39.276968 kubelet[2955]: E0312 23:46:39.276903 2955 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 12 23:46:39.400545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31796836.mount: Deactivated successfully.
Mar 12 23:46:39.416327 containerd[2010]: time="2026-03-12T23:46:39.416222981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:46:39.423127 containerd[2010]: time="2026-03-12T23:46:39.423047573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 12 23:46:39.425205 containerd[2010]: time="2026-03-12T23:46:39.425110889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:46:39.428906 containerd[2010]: time="2026-03-12T23:46:39.428111477Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:46:39.431964 containerd[2010]: time="2026-03-12T23:46:39.431866421Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:46:39.435959 containerd[2010]: time="2026-03-12T23:46:39.435857813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 12 23:46:39.437799 containerd[2010]: time="2026-03-12T23:46:39.437717633Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 12 23:46:39.440080 containerd[2010]: time="2026-03-12T23:46:39.439944713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:46:39.442039 containerd[2010]: time="2026-03-12T23:46:39.441413621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 566.802783ms"
Mar 12 23:46:39.449691 containerd[2010]: time="2026-03-12T23:46:39.449609729Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 546.758451ms"
Mar 12 23:46:39.456358 containerd[2010]: time="2026-03-12T23:46:39.456258353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 564.903411ms"
Mar 12 23:46:39.468222 kubelet[2955]: E0312 23:46:39.467507 2955 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 23:46:39.521412 containerd[2010]: time="2026-03-12T23:46:39.521316053Z" level=info msg="connecting to shim c379f654373486c4f9161a22d0a19ffd1174249595abf75cb74dcbf87f6a8359" address="unix:///run/containerd/s/f921a3b64df56a2583352bb8351fb721eda26fc65c5949ac4994f630f7525e29" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:46:39.534438 containerd[2010]: time="2026-03-12T23:46:39.534305741Z" level=info msg="connecting to shim 398130d4757a6abb12c518ae80665daafecf5d047a8a7af246c9e354165da184" address="unix:///run/containerd/s/c1c17f1d6b5b0d05cf9464e745295a493845b4ad347fe9b24db8b315e623916c" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:46:39.546383 containerd[2010]: time="2026-03-12T23:46:39.546277025Z" level=info msg="connecting to shim f127491ded726ec7811a9ec2494ef41214a7473848976126d18397006d7ceae5" address="unix:///run/containerd/s/938635d37ada50f57fa5a4a9a4dfa008920c613f369a195800f92091d1294af1" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:46:39.600462 kubelet[2955]: E0312 23:46:39.600237 2955 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 12 23:46:39.615435 systemd[1]: Started cri-containerd-c379f654373486c4f9161a22d0a19ffd1174249595abf75cb74dcbf87f6a8359.scope - libcontainer container c379f654373486c4f9161a22d0a19ffd1174249595abf75cb74dcbf87f6a8359.
Mar 12 23:46:39.634308 systemd[1]: Started cri-containerd-398130d4757a6abb12c518ae80665daafecf5d047a8a7af246c9e354165da184.scope - libcontainer container 398130d4757a6abb12c518ae80665daafecf5d047a8a7af246c9e354165da184.
Mar 12 23:46:39.639282 systemd[1]: Started cri-containerd-f127491ded726ec7811a9ec2494ef41214a7473848976126d18397006d7ceae5.scope - libcontainer container f127491ded726ec7811a9ec2494ef41214a7473848976126d18397006d7ceae5.
Mar 12 23:46:39.771231 containerd[2010]: time="2026-03-12T23:46:39.769600542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-143,Uid:3d37019ecfab11c2d2ee4ac88dd04f14,Namespace:kube-system,Attempt:0,} returns sandbox id \"c379f654373486c4f9161a22d0a19ffd1174249595abf75cb74dcbf87f6a8359\""
Mar 12 23:46:39.780665 kubelet[2955]: E0312 23:46:39.780577 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-143?timeout=10s\": dial tcp 172.31.24.143:6443: connect: connection refused" interval="1.6s"
Mar 12 23:46:39.790904 containerd[2010]: time="2026-03-12T23:46:39.790815127Z" level=info msg="CreateContainer within sandbox \"c379f654373486c4f9161a22d0a19ffd1174249595abf75cb74dcbf87f6a8359\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 12 23:46:39.803849 containerd[2010]: time="2026-03-12T23:46:39.803776387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-143,Uid:4de86953fa8c46568e1ebaf6198986fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f127491ded726ec7811a9ec2494ef41214a7473848976126d18397006d7ceae5\""
Mar 12 23:46:39.816212 containerd[2010]: time="2026-03-12T23:46:39.816153727Z" level=info msg="CreateContainer within sandbox \"f127491ded726ec7811a9ec2494ef41214a7473848976126d18397006d7ceae5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 12 23:46:39.830304 containerd[2010]: time="2026-03-12T23:46:39.830225071Z" level=info msg="Container be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:46:39.847767 containerd[2010]: time="2026-03-12T23:46:39.847694611Z" level=info msg="Container 925ac115bc9cb96399a572bd28ace575db47c863e54b84a6c4ff8c13dbb69749: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:46:39.853328 kubelet[2955]: E0312 23:46:39.853270 2955 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-143&limit=500&resourceVersion=0\": dial tcp 172.31.24.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 23:46:39.859396 containerd[2010]: time="2026-03-12T23:46:39.859259563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-143,Uid:0f2c2c191bdb8ef427921e962b2cedef,Namespace:kube-system,Attempt:0,} returns sandbox id \"398130d4757a6abb12c518ae80665daafecf5d047a8a7af246c9e354165da184\""
Mar 12 23:46:39.863312 containerd[2010]: time="2026-03-12T23:46:39.863140639Z" level=info msg="CreateContainer within sandbox \"c379f654373486c4f9161a22d0a19ffd1174249595abf75cb74dcbf87f6a8359\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49\""
Mar 12 23:46:39.864758 containerd[2010]: time="2026-03-12T23:46:39.864533875Z" level=info msg="StartContainer for \"be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49\""
Mar 12 23:46:39.869175 containerd[2010]: time="2026-03-12T23:46:39.869105083Z" level=info msg="connecting to shim be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49" address="unix:///run/containerd/s/f921a3b64df56a2583352bb8351fb721eda26fc65c5949ac4994f630f7525e29" protocol=ttrpc version=3
Mar 12 23:46:39.876278 containerd[2010]: time="2026-03-12T23:46:39.876205015Z" level=info msg="CreateContainer within sandbox \"398130d4757a6abb12c518ae80665daafecf5d047a8a7af246c9e354165da184\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 12 23:46:39.878421 containerd[2010]: time="2026-03-12T23:46:39.878348587Z" level=info msg="CreateContainer within sandbox \"f127491ded726ec7811a9ec2494ef41214a7473848976126d18397006d7ceae5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"925ac115bc9cb96399a572bd28ace575db47c863e54b84a6c4ff8c13dbb69749\""
Mar 12 23:46:39.880320 containerd[2010]: time="2026-03-12T23:46:39.880195363Z" level=info msg="StartContainer for \"925ac115bc9cb96399a572bd28ace575db47c863e54b84a6c4ff8c13dbb69749\""
Mar 12 23:46:39.883435 containerd[2010]: time="2026-03-12T23:46:39.883373479Z" level=info msg="connecting to shim 925ac115bc9cb96399a572bd28ace575db47c863e54b84a6c4ff8c13dbb69749" address="unix:///run/containerd/s/938635d37ada50f57fa5a4a9a4dfa008920c613f369a195800f92091d1294af1" protocol=ttrpc version=3
Mar 12 23:46:39.901757 containerd[2010]: time="2026-03-12T23:46:39.901561027Z" level=info msg="Container 641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:46:39.916685 systemd[1]: Started cri-containerd-be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49.scope - libcontainer container be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49.
Mar 12 23:46:39.921799 containerd[2010]: time="2026-03-12T23:46:39.921614227Z" level=info msg="CreateContainer within sandbox \"398130d4757a6abb12c518ae80665daafecf5d047a8a7af246c9e354165da184\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120\""
Mar 12 23:46:39.924155 containerd[2010]: time="2026-03-12T23:46:39.923739823Z" level=info msg="StartContainer for \"641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120\""
Mar 12 23:46:39.928737 containerd[2010]: time="2026-03-12T23:46:39.928625443Z" level=info msg="connecting to shim 641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120" address="unix:///run/containerd/s/c1c17f1d6b5b0d05cf9464e745295a493845b4ad347fe9b24db8b315e623916c" protocol=ttrpc version=3
Mar 12 23:46:39.956277 systemd[1]: Started cri-containerd-925ac115bc9cb96399a572bd28ace575db47c863e54b84a6c4ff8c13dbb69749.scope - libcontainer container 925ac115bc9cb96399a572bd28ace575db47c863e54b84a6c4ff8c13dbb69749.
Mar 12 23:46:39.984322 systemd[1]: Started cri-containerd-641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120.scope - libcontainer container 641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120.
Mar 12 23:46:40.020064 kubelet[2955]: I0312 23:46:40.019787 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-143"
Mar 12 23:46:40.022104 kubelet[2955]: E0312 23:46:40.021947 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.143:6443/api/v1/nodes\": dial tcp 172.31.24.143:6443: connect: connection refused" node="ip-172-31-24-143"
Mar 12 23:46:40.137854 containerd[2010]: time="2026-03-12T23:46:40.137768128Z" level=info msg="StartContainer for \"be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49\" returns successfully"
Mar 12 23:46:40.140421 containerd[2010]: time="2026-03-12T23:46:40.140329684Z" level=info msg="StartContainer for \"925ac115bc9cb96399a572bd28ace575db47c863e54b84a6c4ff8c13dbb69749\" returns successfully"
Mar 12 23:46:40.207603 containerd[2010]: time="2026-03-12T23:46:40.207494765Z" level=info msg="StartContainer for \"641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120\" returns successfully"
Mar 12 23:46:40.451080 kubelet[2955]: E0312 23:46:40.451028 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:40.466354 kubelet[2955]: E0312 23:46:40.466047 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:40.470488 kubelet[2955]: E0312 23:46:40.470106 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:41.475972 kubelet[2955]: E0312 23:46:41.475677 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:41.478416 kubelet[2955]: E0312 23:46:41.478112 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:41.627042 kubelet[2955]: I0312 23:46:41.626971 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-143"
Mar 12 23:46:43.150073 kubelet[2955]: E0312 23:46:43.148625 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:43.384951 kubelet[2955]: E0312 23:46:43.384659 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:43.768470 kubelet[2955]: E0312 23:46:43.768175 2955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:43.903672 kubelet[2955]: E0312 23:46:43.903631 2955 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-143\" not found" node="ip-172-31-24-143"
Mar 12 23:46:43.934709 kubelet[2955]: E0312 23:46:43.934457 2955 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-143.189c3ccb94782b1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-143,UID:ip-172-31-24-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-143,},FirstTimestamp:2026-03-12 23:46:38.344063775 +0000 UTC m=+1.553842700,LastTimestamp:2026-03-12 23:46:38.344063775 +0000 UTC m=+1.553842700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-143,}"
Mar 12 23:46:43.971225 kubelet[2955]: I0312 23:46:43.971066 2955 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-143"
Mar 12 23:46:43.971225 kubelet[2955]: E0312 23:46:43.971122 2955 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-143\": node \"ip-172-31-24-143\" not found"
Mar 12 23:46:44.034712 kubelet[2955]: E0312 23:46:44.034583 2955 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-143.189c3ccb96776908 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-143,UID:ip-172-31-24-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-24-143,},FirstTimestamp:2026-03-12 23:46:38.37756852 +0000 UTC m=+1.587347433,LastTimestamp:2026-03-12 23:46:38.37756852 +0000 UTC m=+1.587347433,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-143,}"
Mar 12 23:46:44.075728 kubelet[2955]: I0312 23:46:44.075676 2955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:44.092535 kubelet[2955]: E0312 23:46:44.092319 2955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:44.092535 kubelet[2955]: I0312 23:46:44.092366 2955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-143"
Mar 12 23:46:44.099950 kubelet[2955]: E0312 23:46:44.099723 2955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-143"
Mar 12 23:46:44.099950 kubelet[2955]: I0312 23:46:44.099767 2955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-143"
Mar 12 23:46:44.111576 kubelet[2955]: E0312 23:46:44.111521 2955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-143"
Mar 12 23:46:44.340765 kubelet[2955]: I0312 23:46:44.339888 2955 apiserver.go:52] "Watching apiserver"
Mar 12 23:46:44.371785 kubelet[2955]: I0312 23:46:44.371700 2955 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 12 23:46:46.268542 systemd[1]: Reload requested from client PID 3245 ('systemctl') (unit session-7.scope)...
Mar 12 23:46:46.269096 systemd[1]: Reloading...
Mar 12 23:46:46.473026 zram_generator::config[3289]: No configuration found.
Mar 12 23:46:46.977430 systemd[1]: Reloading finished in 707 ms.
Mar 12 23:46:47.021517 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:46:47.044729 systemd[1]: kubelet.service: Deactivated successfully.
Mar 12 23:46:47.045715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:46:47.047183 systemd[1]: kubelet.service: Consumed 2.375s CPU time, 121.2M memory peak.
Mar 12 23:46:47.051618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:46:47.474897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:46:47.491553 (kubelet)[3349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 23:46:47.595066 kubelet[3349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 23:46:47.595066 kubelet[3349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 23:46:47.595066 kubelet[3349]: I0312 23:46:47.594279 3349 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 23:46:47.616099 kubelet[3349]: I0312 23:46:47.615897 3349 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 12 23:46:47.616099 kubelet[3349]: I0312 23:46:47.615950 3349 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 23:46:47.616099 kubelet[3349]: I0312 23:46:47.616093 3349 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 12 23:46:47.616415 kubelet[3349]: I0312 23:46:47.616114 3349 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 23:46:47.616548 kubelet[3349]: I0312 23:46:47.616496 3349 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 12 23:46:47.619542 kubelet[3349]: I0312 23:46:47.619494 3349 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 12 23:46:47.625132 kubelet[3349]: I0312 23:46:47.624220 3349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 23:46:47.638980 kubelet[3349]: I0312 23:46:47.638934 3349 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 23:46:47.646360 kubelet[3349]: I0312 23:46:47.644851 3349 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 12 23:46:47.646360 kubelet[3349]: I0312 23:46:47.645407 3349 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 23:46:47.646360 kubelet[3349]: I0312 23:46:47.645452 3349 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 23:46:47.646360 kubelet[3349]: I0312 23:46:47.645910 3349 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 23:46:47.646741 kubelet[3349]: I0312 23:46:47.645932 3349 container_manager_linux.go:306] "Creating device plugin manager"
Mar 12 23:46:47.646741 kubelet[3349]: I0312 23:46:47.645975 3349 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 12 23:46:47.647046 kubelet[3349]: I0312 23:46:47.646945 3349 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 23:46:47.647287 kubelet[3349]: I0312 23:46:47.647250 3349 kubelet.go:475] "Attempting to sync node with API server"
Mar 12 23:46:47.647358 kubelet[3349]: I0312 23:46:47.647297 3349 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 23:46:47.647358 kubelet[3349]: I0312 23:46:47.647344 3349 kubelet.go:387] "Adding apiserver pod source"
Mar 12 23:46:47.648606 kubelet[3349]: I0312 23:46:47.647374 3349 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 23:46:47.653722 kubelet[3349]: I0312 23:46:47.653685 3349 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 12 23:46:47.659258 kubelet[3349]: I0312 23:46:47.657544 3349 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 23:46:47.659258 kubelet[3349]: I0312 23:46:47.657637 3349 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 12 23:46:47.672540 kubelet[3349]: I0312 23:46:47.672496 3349 server.go:1262] "Started kubelet"
Mar 12 23:46:47.676121 kubelet[3349]: I0312 23:46:47.676064 3349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 23:46:47.686915 kubelet[3349]: I0312 23:46:47.685922 3349 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 23:46:47.688043 kubelet[3349]: I0312 23:46:47.687901 3349 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 23:46:47.688043 kubelet[3349]: I0312 23:46:47.688038 3349 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 12 23:46:47.689978 kubelet[3349]: I0312 23:46:47.688412 3349 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 23:46:47.694026 kubelet[3349]: I0312 23:46:47.693298 3349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 23:46:47.697212 kubelet[3349]: I0312 23:46:47.696409 3349 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 12 23:46:47.697212 kubelet[3349]: E0312 23:46:47.696759 3349 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-143\" not found"
Mar 12 23:46:47.697903 kubelet[3349]: I0312 23:46:47.697853 3349 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 12 23:46:47.698149 kubelet[3349]: I0312 23:46:47.698118 3349 reconciler.go:29] "Reconciler: start to sync state"
Mar 12 23:46:47.702042 kubelet[3349]: I0312 23:46:47.699626 3349 server.go:310] "Adding debug handlers to kubelet server"
Mar 12 23:46:47.709036 kubelet[3349]: I0312 23:46:47.708448 3349 factory.go:223] Registration of the systemd container factory successfully
Mar 12 23:46:47.714922 kubelet[3349]: I0312 23:46:47.714707 3349 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 23:46:47.752238 kubelet[3349]: I0312 23:46:47.752113 3349 factory.go:223] Registration of the containerd container factory successfully
Mar 12 23:46:47.777022 kubelet[3349]: E0312 23:46:47.776249 3349 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 23:46:47.813472 kubelet[3349]: I0312 23:46:47.813400 3349 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 12 23:46:47.823036 kubelet[3349]: I0312 23:46:47.822956 3349 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 12 23:46:47.823036 kubelet[3349]: I0312 23:46:47.823031 3349 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 12 23:46:47.823245 kubelet[3349]: I0312 23:46:47.823076 3349 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 12 23:46:47.824326 kubelet[3349]: E0312 23:46:47.823146 3349 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 23:46:47.894603 kubelet[3349]: I0312 23:46:47.894571 3349 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.894746 3349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.894784 3349 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.895041 3349 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.895061 3349 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.895092 3349 policy_none.go:49] "None policy: Start"
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.895110 3349 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.895129 3349 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.895322 3349 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 12 23:46:47.896023 kubelet[3349]: I0312 23:46:47.895339 3349 policy_none.go:47] "Start"
Mar 12 23:46:47.912328 kubelet[3349]: E0312 23:46:47.912291 3349 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 12 23:46:47.913351 kubelet[3349]: I0312 23:46:47.913326 3349 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 23:46:47.913644 kubelet[3349]: I0312 23:46:47.913583 3349 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 23:46:47.914389 kubelet[3349]: I0312 23:46:47.914350 3349 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 23:46:47.923019 kubelet[3349]: E0312 23:46:47.921639 3349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 12 23:46:47.926765 kubelet[3349]: I0312 23:46:47.926720 3349 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-143"
Mar 12 23:46:47.930062 kubelet[3349]: I0312 23:46:47.927660 3349 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:47.933036 kubelet[3349]: I0312 23:46:47.927864 3349 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-143"
Mar 12 23:46:48.038368 kubelet[3349]: I0312 23:46:48.038253 3349 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-143"
Mar 12 23:46:48.055567 kubelet[3349]: I0312 23:46:48.055507 3349 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-143"
Mar 12 23:46:48.055932 kubelet[3349]: I0312 23:46:48.055851 3349 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-143"
Mar 12 23:46:48.099861 kubelet[3349]: I0312 23:46:48.099719 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143"
Mar 12 23:46:48.100438
kubelet[3349]: I0312 23:46:48.100389 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143" Mar 12 23:46:48.100597 kubelet[3349]: I0312 23:46:48.100573 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4de86953fa8c46568e1ebaf6198986fb-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-143\" (UID: \"4de86953fa8c46568e1ebaf6198986fb\") " pod="kube-system/kube-apiserver-ip-172-31-24-143" Mar 12 23:46:48.100763 kubelet[3349]: I0312 23:46:48.100738 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4de86953fa8c46568e1ebaf6198986fb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-143\" (UID: \"4de86953fa8c46568e1ebaf6198986fb\") " pod="kube-system/kube-apiserver-ip-172-31-24-143" Mar 12 23:46:48.100922 kubelet[3349]: I0312 23:46:48.100897 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143" Mar 12 23:46:48.102140 kubelet[3349]: I0312 23:46:48.101867 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: 
\"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143" Mar 12 23:46:48.102140 kubelet[3349]: I0312 23:46:48.101920 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d37019ecfab11c2d2ee4ac88dd04f14-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-143\" (UID: \"3d37019ecfab11c2d2ee4ac88dd04f14\") " pod="kube-system/kube-controller-manager-ip-172-31-24-143" Mar 12 23:46:48.102140 kubelet[3349]: I0312 23:46:48.101961 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f2c2c191bdb8ef427921e962b2cedef-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-143\" (UID: \"0f2c2c191bdb8ef427921e962b2cedef\") " pod="kube-system/kube-scheduler-ip-172-31-24-143" Mar 12 23:46:48.102140 kubelet[3349]: I0312 23:46:48.102016 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4de86953fa8c46568e1ebaf6198986fb-ca-certs\") pod \"kube-apiserver-ip-172-31-24-143\" (UID: \"4de86953fa8c46568e1ebaf6198986fb\") " pod="kube-system/kube-apiserver-ip-172-31-24-143" Mar 12 23:46:48.668043 kubelet[3349]: I0312 23:46:48.667763 3349 apiserver.go:52] "Watching apiserver" Mar 12 23:46:48.698620 kubelet[3349]: I0312 23:46:48.698537 3349 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 23:46:48.884253 kubelet[3349]: I0312 23:46:48.884199 3349 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-143" Mar 12 23:46:48.885897 kubelet[3349]: I0312 23:46:48.885837 3349 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-143" Mar 12 23:46:48.913302 kubelet[3349]: E0312 23:46:48.913227 3349 kubelet.go:3222] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-143\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-143" Mar 12 23:46:48.913727 kubelet[3349]: E0312 23:46:48.913659 3349 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-143\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-143" Mar 12 23:46:48.977515 kubelet[3349]: I0312 23:46:48.976308 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-143" podStartSLOduration=1.97628596 podStartE2EDuration="1.97628596s" podCreationTimestamp="2026-03-12 23:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:46:48.960679912 +0000 UTC m=+1.459433636" watchObservedRunningTime="2026-03-12 23:46:48.97628596 +0000 UTC m=+1.475039684" Mar 12 23:46:48.977515 kubelet[3349]: I0312 23:46:48.976475 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-143" podStartSLOduration=1.97646494 podStartE2EDuration="1.97646494s" podCreationTimestamp="2026-03-12 23:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:46:48.974448712 +0000 UTC m=+1.473202508" watchObservedRunningTime="2026-03-12 23:46:48.97646494 +0000 UTC m=+1.475218652" Mar 12 23:46:48.994704 kubelet[3349]: I0312 23:46:48.994509 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-143" podStartSLOduration=1.994488016 podStartE2EDuration="1.994488016s" podCreationTimestamp="2026-03-12 23:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:46:48.994181692 +0000 UTC m=+1.492935416" 
watchObservedRunningTime="2026-03-12 23:46:48.994488016 +0000 UTC m=+1.493241728" Mar 12 23:46:49.173053 update_engine[1981]: I20260312 23:46:49.172829 1981 update_attempter.cc:509] Updating boot flags... Mar 12 23:46:53.205527 kubelet[3349]: I0312 23:46:53.205305 3349 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 23:46:53.206891 containerd[2010]: time="2026-03-12T23:46:53.206830673Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 23:46:53.208050 kubelet[3349]: I0312 23:46:53.207234 3349 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 23:46:54.102472 systemd[1]: Created slice kubepods-besteffort-podec69ebef_777f_4cd3_aa3b_d8bd9f4430fc.slice - libcontainer container kubepods-besteffort-podec69ebef_777f_4cd3_aa3b_d8bd9f4430fc.slice. Mar 12 23:46:54.143699 kubelet[3349]: I0312 23:46:54.143624 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttkmd\" (UniqueName: \"kubernetes.io/projected/ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc-kube-api-access-ttkmd\") pod \"kube-proxy-7pnq2\" (UID: \"ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc\") " pod="kube-system/kube-proxy-7pnq2" Mar 12 23:46:54.143699 kubelet[3349]: I0312 23:46:54.143696 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc-kube-proxy\") pod \"kube-proxy-7pnq2\" (UID: \"ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc\") " pod="kube-system/kube-proxy-7pnq2" Mar 12 23:46:54.143922 kubelet[3349]: I0312 23:46:54.143733 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc-xtables-lock\") pod \"kube-proxy-7pnq2\" (UID: 
\"ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc\") " pod="kube-system/kube-proxy-7pnq2" Mar 12 23:46:54.143922 kubelet[3349]: I0312 23:46:54.143768 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc-lib-modules\") pod \"kube-proxy-7pnq2\" (UID: \"ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc\") " pod="kube-system/kube-proxy-7pnq2" Mar 12 23:46:54.425351 containerd[2010]: time="2026-03-12T23:46:54.425191399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7pnq2,Uid:ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc,Namespace:kube-system,Attempt:0,}" Mar 12 23:46:54.443021 systemd[1]: Created slice kubepods-besteffort-pod3ce07789_10ac_4219_9e5d_9ea36b833731.slice - libcontainer container kubepods-besteffort-pod3ce07789_10ac_4219_9e5d_9ea36b833731.slice. Mar 12 23:46:54.445764 kubelet[3349]: I0312 23:46:54.445253 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3ce07789-10ac-4219-9e5d-9ea36b833731-var-lib-calico\") pod \"tigera-operator-5588576f44-h24xd\" (UID: \"3ce07789-10ac-4219-9e5d-9ea36b833731\") " pod="tigera-operator/tigera-operator-5588576f44-h24xd" Mar 12 23:46:54.448094 kubelet[3349]: I0312 23:46:54.446399 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjqnz\" (UniqueName: \"kubernetes.io/projected/3ce07789-10ac-4219-9e5d-9ea36b833731-kube-api-access-cjqnz\") pod \"tigera-operator-5588576f44-h24xd\" (UID: \"3ce07789-10ac-4219-9e5d-9ea36b833731\") " pod="tigera-operator/tigera-operator-5588576f44-h24xd" Mar 12 23:46:54.492019 containerd[2010]: time="2026-03-12T23:46:54.491475512Z" level=info msg="connecting to shim 539c8a2dee29af6bf3d2c303dad6ddb1f78a2f42a70824de8d284a937f9a14d3" 
address="unix:///run/containerd/s/aecb86486e92c6cd51a41ca70c4ea3275d8546542a8f41ec5a9185a5c3e8fbb6" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:46:54.543310 systemd[1]: Started cri-containerd-539c8a2dee29af6bf3d2c303dad6ddb1f78a2f42a70824de8d284a937f9a14d3.scope - libcontainer container 539c8a2dee29af6bf3d2c303dad6ddb1f78a2f42a70824de8d284a937f9a14d3. Mar 12 23:46:54.610920 containerd[2010]: time="2026-03-12T23:46:54.610861988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7pnq2,Uid:ec69ebef-777f-4cd3-aa3b-d8bd9f4430fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"539c8a2dee29af6bf3d2c303dad6ddb1f78a2f42a70824de8d284a937f9a14d3\"" Mar 12 23:46:54.622553 containerd[2010]: time="2026-03-12T23:46:54.622504496Z" level=info msg="CreateContainer within sandbox \"539c8a2dee29af6bf3d2c303dad6ddb1f78a2f42a70824de8d284a937f9a14d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 23:46:54.649082 containerd[2010]: time="2026-03-12T23:46:54.648840836Z" level=info msg="Container c4e3ad98047d128ede21af6b54bd60df4e1b1393b499afc7ef7220e2153f7092: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:46:54.665090 containerd[2010]: time="2026-03-12T23:46:54.664964672Z" level=info msg="CreateContainer within sandbox \"539c8a2dee29af6bf3d2c303dad6ddb1f78a2f42a70824de8d284a937f9a14d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4e3ad98047d128ede21af6b54bd60df4e1b1393b499afc7ef7220e2153f7092\"" Mar 12 23:46:54.666328 containerd[2010]: time="2026-03-12T23:46:54.666092684Z" level=info msg="StartContainer for \"c4e3ad98047d128ede21af6b54bd60df4e1b1393b499afc7ef7220e2153f7092\"" Mar 12 23:46:54.670383 containerd[2010]: time="2026-03-12T23:46:54.670277120Z" level=info msg="connecting to shim c4e3ad98047d128ede21af6b54bd60df4e1b1393b499afc7ef7220e2153f7092" address="unix:///run/containerd/s/aecb86486e92c6cd51a41ca70c4ea3275d8546542a8f41ec5a9185a5c3e8fbb6" protocol=ttrpc version=3 Mar 12 
23:46:54.699299 systemd[1]: Started cri-containerd-c4e3ad98047d128ede21af6b54bd60df4e1b1393b499afc7ef7220e2153f7092.scope - libcontainer container c4e3ad98047d128ede21af6b54bd60df4e1b1393b499afc7ef7220e2153f7092. Mar 12 23:46:54.755926 containerd[2010]: time="2026-03-12T23:46:54.755845593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-h24xd,Uid:3ce07789-10ac-4219-9e5d-9ea36b833731,Namespace:tigera-operator,Attempt:0,}" Mar 12 23:46:54.807594 containerd[2010]: time="2026-03-12T23:46:54.807335061Z" level=info msg="connecting to shim f85a3412d430f96abfbfa62a1456cb5d9938082f7077bb24f40530ef19c05309" address="unix:///run/containerd/s/08ccef7032bdf2911c32b19d9af66fa4121013446f9ad86cd21dbe5443bce8ef" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:46:54.841846 containerd[2010]: time="2026-03-12T23:46:54.840717129Z" level=info msg="StartContainer for \"c4e3ad98047d128ede21af6b54bd60df4e1b1393b499afc7ef7220e2153f7092\" returns successfully" Mar 12 23:46:54.872364 systemd[1]: Started cri-containerd-f85a3412d430f96abfbfa62a1456cb5d9938082f7077bb24f40530ef19c05309.scope - libcontainer container f85a3412d430f96abfbfa62a1456cb5d9938082f7077bb24f40530ef19c05309. Mar 12 23:46:55.003829 containerd[2010]: time="2026-03-12T23:46:55.003409350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-h24xd,Uid:3ce07789-10ac-4219-9e5d-9ea36b833731,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f85a3412d430f96abfbfa62a1456cb5d9938082f7077bb24f40530ef19c05309\"" Mar 12 23:46:55.008813 containerd[2010]: time="2026-03-12T23:46:55.008686770Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 12 23:46:56.502909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981696469.mount: Deactivated successfully. 
Mar 12 23:46:56.881729 kubelet[3349]: I0312 23:46:56.881569 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7pnq2" podStartSLOduration=2.881546039 podStartE2EDuration="2.881546039s" podCreationTimestamp="2026-03-12 23:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:46:54.935653978 +0000 UTC m=+7.434407738" watchObservedRunningTime="2026-03-12 23:46:56.881546039 +0000 UTC m=+9.380299751" Mar 12 23:46:57.412595 containerd[2010]: time="2026-03-12T23:46:57.412528114Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:57.414036 containerd[2010]: time="2026-03-12T23:46:57.413866114Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 12 23:46:57.415716 containerd[2010]: time="2026-03-12T23:46:57.415661254Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:57.420175 containerd[2010]: time="2026-03-12T23:46:57.419549386Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:57.420959 containerd[2010]: time="2026-03-12T23:46:57.420905722Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.41209924s" Mar 12 23:46:57.421086 containerd[2010]: time="2026-03-12T23:46:57.420958210Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 12 23:46:57.428970 containerd[2010]: time="2026-03-12T23:46:57.428816986Z" level=info msg="CreateContainer within sandbox \"f85a3412d430f96abfbfa62a1456cb5d9938082f7077bb24f40530ef19c05309\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 12 23:46:57.447018 containerd[2010]: time="2026-03-12T23:46:57.446900362Z" level=info msg="Container f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:46:57.453522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154987928.mount: Deactivated successfully. Mar 12 23:46:57.463617 containerd[2010]: time="2026-03-12T23:46:57.463557550Z" level=info msg="CreateContainer within sandbox \"f85a3412d430f96abfbfa62a1456cb5d9938082f7077bb24f40530ef19c05309\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398\"" Mar 12 23:46:57.465163 containerd[2010]: time="2026-03-12T23:46:57.465093754Z" level=info msg="StartContainer for \"f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398\"" Mar 12 23:46:57.467213 containerd[2010]: time="2026-03-12T23:46:57.467144554Z" level=info msg="connecting to shim f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398" address="unix:///run/containerd/s/08ccef7032bdf2911c32b19d9af66fa4121013446f9ad86cd21dbe5443bce8ef" protocol=ttrpc version=3 Mar 12 23:46:57.515342 systemd[1]: Started cri-containerd-f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398.scope - libcontainer container f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398. 
Mar 12 23:46:57.589560 containerd[2010]: time="2026-03-12T23:46:57.589426415Z" level=info msg="StartContainer for \"f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398\" returns successfully" Mar 12 23:46:57.945928 kubelet[3349]: I0312 23:46:57.945757 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-h24xd" podStartSLOduration=1.530815401 podStartE2EDuration="3.945736885s" podCreationTimestamp="2026-03-12 23:46:54 +0000 UTC" firstStartedPulling="2026-03-12 23:46:55.007765914 +0000 UTC m=+7.506519626" lastFinishedPulling="2026-03-12 23:46:57.422687398 +0000 UTC m=+9.921441110" observedRunningTime="2026-03-12 23:46:57.945402001 +0000 UTC m=+10.444155713" watchObservedRunningTime="2026-03-12 23:46:57.945736885 +0000 UTC m=+10.444490597" Mar 12 23:47:04.384752 sudo[2366]: pam_unix(sudo:session): session closed for user root Mar 12 23:47:04.464256 sshd[2365]: Connection closed by 4.153.228.146 port 35104 Mar 12 23:47:04.465266 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Mar 12 23:47:04.474418 systemd[1]: sshd@6-172.31.24.143:22-4.153.228.146:35104.service: Deactivated successfully. Mar 12 23:47:04.483764 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 23:47:04.484436 systemd[1]: session-7.scope: Consumed 10.917s CPU time, 220.3M memory peak. Mar 12 23:47:04.492275 systemd-logind[1980]: Session 7 logged out. Waiting for processes to exit. Mar 12 23:47:04.494879 systemd-logind[1980]: Removed session 7. Mar 12 23:47:12.923244 systemd[1]: Created slice kubepods-besteffort-pod5d5b1aee_2f97_4fd7_8254_43035d389c8e.slice - libcontainer container kubepods-besteffort-pod5d5b1aee_2f97_4fd7_8254_43035d389c8e.slice. 
Mar 12 23:47:12.992098 kubelet[3349]: I0312 23:47:12.991963 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d5b1aee-2f97-4fd7-8254-43035d389c8e-tigera-ca-bundle\") pod \"calico-typha-7b47bdf6ff-7g74t\" (UID: \"5d5b1aee-2f97-4fd7-8254-43035d389c8e\") " pod="calico-system/calico-typha-7b47bdf6ff-7g74t" Mar 12 23:47:12.992700 kubelet[3349]: I0312 23:47:12.992132 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5k4m\" (UniqueName: \"kubernetes.io/projected/5d5b1aee-2f97-4fd7-8254-43035d389c8e-kube-api-access-d5k4m\") pod \"calico-typha-7b47bdf6ff-7g74t\" (UID: \"5d5b1aee-2f97-4fd7-8254-43035d389c8e\") " pod="calico-system/calico-typha-7b47bdf6ff-7g74t" Mar 12 23:47:12.992700 kubelet[3349]: I0312 23:47:12.992303 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5d5b1aee-2f97-4fd7-8254-43035d389c8e-typha-certs\") pod \"calico-typha-7b47bdf6ff-7g74t\" (UID: \"5d5b1aee-2f97-4fd7-8254-43035d389c8e\") " pod="calico-system/calico-typha-7b47bdf6ff-7g74t" Mar 12 23:47:13.103830 systemd[1]: Created slice kubepods-besteffort-pod29dcec12_5627_4d4d_97ce_2732cae17317.slice - libcontainer container kubepods-besteffort-pod29dcec12_5627_4d4d_97ce_2732cae17317.slice. 
Mar 12 23:47:13.193762 kubelet[3349]: I0312 23:47:13.193357 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-cni-bin-dir\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.193762 kubelet[3349]: I0312 23:47:13.193419 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-cni-net-dir\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.193762 kubelet[3349]: I0312 23:47:13.193454 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-nodeproc\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.193762 kubelet[3349]: I0312 23:47:13.193489 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-sys-fs\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.193762 kubelet[3349]: I0312 23:47:13.193528 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-var-lib-calico\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194611 kubelet[3349]: I0312 23:47:13.193619 3349 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-lib-modules\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194611 kubelet[3349]: I0312 23:47:13.193969 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-flexvol-driver-host\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194611 kubelet[3349]: I0312 23:47:13.194125 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-var-run-calico\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194611 kubelet[3349]: I0312 23:47:13.194198 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-bpffs\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194611 kubelet[3349]: I0312 23:47:13.194281 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/29dcec12-5627-4d4d-97ce-2732cae17317-node-certs\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194890 kubelet[3349]: I0312 23:47:13.194325 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-policysync\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194890 kubelet[3349]: I0312 23:47:13.194393 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6zqf\" (UniqueName: \"kubernetes.io/projected/29dcec12-5627-4d4d-97ce-2732cae17317-kube-api-access-d6zqf\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194890 kubelet[3349]: I0312 23:47:13.194436 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29dcec12-5627-4d4d-97ce-2732cae17317-tigera-ca-bundle\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194890 kubelet[3349]: I0312 23:47:13.194504 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-cni-log-dir\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.194890 kubelet[3349]: I0312 23:47:13.194591 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29dcec12-5627-4d4d-97ce-2732cae17317-xtables-lock\") pod \"calico-node-md969\" (UID: \"29dcec12-5627-4d4d-97ce-2732cae17317\") " pod="calico-system/calico-node-md969" Mar 12 23:47:13.217751 kubelet[3349]: E0312 23:47:13.217551 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa" Mar 12 23:47:13.242337 containerd[2010]: time="2026-03-12T23:47:13.242234329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b47bdf6ff-7g74t,Uid:5d5b1aee-2f97-4fd7-8254-43035d389c8e,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:13.295896 kubelet[3349]: I0312 23:47:13.295827 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/facd10a7-b796-431d-84d9-924988ee39fa-socket-dir\") pod \"csi-node-driver-6s59z\" (UID: \"facd10a7-b796-431d-84d9-924988ee39fa\") " pod="calico-system/csi-node-driver-6s59z" Mar 12 23:47:13.298597 kubelet[3349]: I0312 23:47:13.298294 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/facd10a7-b796-431d-84d9-924988ee39fa-registration-dir\") pod \"csi-node-driver-6s59z\" (UID: \"facd10a7-b796-431d-84d9-924988ee39fa\") " pod="calico-system/csi-node-driver-6s59z" Mar 12 23:47:13.298597 kubelet[3349]: I0312 23:47:13.298373 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/facd10a7-b796-431d-84d9-924988ee39fa-varrun\") pod \"csi-node-driver-6s59z\" (UID: \"facd10a7-b796-431d-84d9-924988ee39fa\") " pod="calico-system/csi-node-driver-6s59z" Mar 12 23:47:13.298597 kubelet[3349]: I0312 23:47:13.298417 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h277s\" (UniqueName: \"kubernetes.io/projected/facd10a7-b796-431d-84d9-924988ee39fa-kube-api-access-h277s\") pod \"csi-node-driver-6s59z\" (UID: \"facd10a7-b796-431d-84d9-924988ee39fa\") " pod="calico-system/csi-node-driver-6s59z" Mar 12 23:47:13.298597 
kubelet[3349]: I0312 23:47:13.298484 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/facd10a7-b796-431d-84d9-924988ee39fa-kubelet-dir\") pod \"csi-node-driver-6s59z\" (UID: \"facd10a7-b796-431d-84d9-924988ee39fa\") " pod="calico-system/csi-node-driver-6s59z" Mar 12 23:47:13.304966 kubelet[3349]: E0312 23:47:13.304886 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.304966 kubelet[3349]: W0312 23:47:13.304933 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.306537 kubelet[3349]: E0312 23:47:13.304979 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.307190 kubelet[3349]: E0312 23:47:13.307129 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.307190 kubelet[3349]: W0312 23:47:13.307168 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.308526 kubelet[3349]: E0312 23:47:13.307200 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.310218 kubelet[3349]: E0312 23:47:13.310170 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.310218 kubelet[3349]: W0312 23:47:13.310209 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.310718 kubelet[3349]: E0312 23:47:13.310242 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.311310 kubelet[3349]: E0312 23:47:13.311257 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.311310 kubelet[3349]: W0312 23:47:13.311295 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.311723 kubelet[3349]: E0312 23:47:13.311327 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.314398 kubelet[3349]: E0312 23:47:13.314343 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.314398 kubelet[3349]: W0312 23:47:13.314383 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.314716 kubelet[3349]: E0312 23:47:13.314417 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.318237 kubelet[3349]: E0312 23:47:13.318115 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.318237 kubelet[3349]: W0312 23:47:13.318154 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.318237 kubelet[3349]: E0312 23:47:13.318187 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.322949 kubelet[3349]: E0312 23:47:13.322153 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.322949 kubelet[3349]: W0312 23:47:13.322194 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.322949 kubelet[3349]: E0312 23:47:13.322227 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.323231 kubelet[3349]: E0312 23:47:13.323050 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.323231 kubelet[3349]: W0312 23:47:13.323074 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.323231 kubelet[3349]: E0312 23:47:13.323102 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.330071 kubelet[3349]: E0312 23:47:13.329405 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.330071 kubelet[3349]: W0312 23:47:13.329442 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.330071 kubelet[3349]: E0312 23:47:13.329476 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.334636 kubelet[3349]: E0312 23:47:13.333280 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.334636 kubelet[3349]: W0312 23:47:13.333322 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.334636 kubelet[3349]: E0312 23:47:13.333356 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.336580 kubelet[3349]: E0312 23:47:13.336500 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.336580 kubelet[3349]: W0312 23:47:13.336539 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.336580 kubelet[3349]: E0312 23:47:13.336573 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.345067 containerd[2010]: time="2026-03-12T23:47:13.343431457Z" level=info msg="connecting to shim 5633eb774ea5dcfe6712a79c21e4f8fdf756cf098a4ae89565b22ff5c603fa45" address="unix:///run/containerd/s/f180127778deafd76530616518d9dbc4d4030d788086667eab370ef2e180e533" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:13.398213 kubelet[3349]: E0312 23:47:13.398176 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.398721 kubelet[3349]: W0312 23:47:13.398389 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.398721 kubelet[3349]: E0312 23:47:13.398427 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.403816 kubelet[3349]: E0312 23:47:13.403676 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.403816 kubelet[3349]: W0312 23:47:13.403716 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.403816 kubelet[3349]: E0312 23:47:13.403751 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.406040 kubelet[3349]: E0312 23:47:13.405945 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.406845 kubelet[3349]: W0312 23:47:13.406359 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.407258 kubelet[3349]: E0312 23:47:13.407073 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.409817 kubelet[3349]: E0312 23:47:13.409678 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.409817 kubelet[3349]: W0312 23:47:13.409716 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.409817 kubelet[3349]: E0312 23:47:13.409750 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.413098 kubelet[3349]: E0312 23:47:13.410580 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.413098 kubelet[3349]: W0312 23:47:13.410605 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.413098 kubelet[3349]: E0312 23:47:13.410639 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.418912 kubelet[3349]: E0312 23:47:13.418860 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.418912 kubelet[3349]: W0312 23:47:13.418894 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.419105 kubelet[3349]: E0312 23:47:13.418926 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.420526 containerd[2010]: time="2026-03-12T23:47:13.420285794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-md969,Uid:29dcec12-5627-4d4d-97ce-2732cae17317,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:13.421157 kubelet[3349]: E0312 23:47:13.421105 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.421157 kubelet[3349]: W0312 23:47:13.421148 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.421471 kubelet[3349]: E0312 23:47:13.421181 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.421471 kubelet[3349]: E0312 23:47:13.421548 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.421471 kubelet[3349]: W0312 23:47:13.421568 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.421471 kubelet[3349]: E0312 23:47:13.421596 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.421471 kubelet[3349]: E0312 23:47:13.421902 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.421471 kubelet[3349]: W0312 23:47:13.421921 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.421471 kubelet[3349]: E0312 23:47:13.421943 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.422612 kubelet[3349]: E0312 23:47:13.422260 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.422612 kubelet[3349]: W0312 23:47:13.422283 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.422612 kubelet[3349]: E0312 23:47:13.422309 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.424369 kubelet[3349]: E0312 23:47:13.423533 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.424369 kubelet[3349]: W0312 23:47:13.423577 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.424369 kubelet[3349]: E0312 23:47:13.423613 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.425497 kubelet[3349]: E0312 23:47:13.425211 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.425497 kubelet[3349]: W0312 23:47:13.425248 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.425497 kubelet[3349]: E0312 23:47:13.425281 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.425714 kubelet[3349]: E0312 23:47:13.425626 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.425714 kubelet[3349]: W0312 23:47:13.425645 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.425714 kubelet[3349]: E0312 23:47:13.425670 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.427058 kubelet[3349]: E0312 23:47:13.426055 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.427058 kubelet[3349]: W0312 23:47:13.426085 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.427058 kubelet[3349]: E0312 23:47:13.426111 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.427058 kubelet[3349]: E0312 23:47:13.426885 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.427058 kubelet[3349]: W0312 23:47:13.426910 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.427058 kubelet[3349]: E0312 23:47:13.426939 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.428375 systemd[1]: Started cri-containerd-5633eb774ea5dcfe6712a79c21e4f8fdf756cf098a4ae89565b22ff5c603fa45.scope - libcontainer container 5633eb774ea5dcfe6712a79c21e4f8fdf756cf098a4ae89565b22ff5c603fa45. 
Mar 12 23:47:13.431552 kubelet[3349]: E0312 23:47:13.431488 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.434248 kubelet[3349]: W0312 23:47:13.432788 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.434248 kubelet[3349]: E0312 23:47:13.432844 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.436058 kubelet[3349]: E0312 23:47:13.435955 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.438343 kubelet[3349]: W0312 23:47:13.438198 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.438557 kubelet[3349]: E0312 23:47:13.438506 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.441384 kubelet[3349]: E0312 23:47:13.441326 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.441533 kubelet[3349]: W0312 23:47:13.441415 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.441533 kubelet[3349]: E0312 23:47:13.441453 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.446406 kubelet[3349]: E0312 23:47:13.444100 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.446406 kubelet[3349]: W0312 23:47:13.444186 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.446406 kubelet[3349]: E0312 23:47:13.444346 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.446760 kubelet[3349]: E0312 23:47:13.446525 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.447842 kubelet[3349]: W0312 23:47:13.446556 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.447842 kubelet[3349]: E0312 23:47:13.447223 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.450287 kubelet[3349]: E0312 23:47:13.450202 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.450287 kubelet[3349]: W0312 23:47:13.450273 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.451112 kubelet[3349]: E0312 23:47:13.451045 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.453654 kubelet[3349]: E0312 23:47:13.453032 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.453654 kubelet[3349]: W0312 23:47:13.453063 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.453654 kubelet[3349]: E0312 23:47:13.453096 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.456163 kubelet[3349]: E0312 23:47:13.456102 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.456410 kubelet[3349]: W0312 23:47:13.456164 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.456410 kubelet[3349]: E0312 23:47:13.456328 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.462359 kubelet[3349]: E0312 23:47:13.462261 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.462359 kubelet[3349]: W0312 23:47:13.462303 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.462359 kubelet[3349]: E0312 23:47:13.462338 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.464807 kubelet[3349]: E0312 23:47:13.464747 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.464807 kubelet[3349]: W0312 23:47:13.464788 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.464978 kubelet[3349]: E0312 23:47:13.464822 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.468054 kubelet[3349]: E0312 23:47:13.467287 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.468753 kubelet[3349]: W0312 23:47:13.467328 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.468886 kubelet[3349]: E0312 23:47:13.468766 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:13.504340 kubelet[3349]: E0312 23:47:13.503918 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:13.504340 kubelet[3349]: W0312 23:47:13.503962 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:13.504340 kubelet[3349]: E0312 23:47:13.504027 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:47:13.514435 containerd[2010]: time="2026-03-12T23:47:13.514058750Z" level=info msg="connecting to shim 4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5" address="unix:///run/containerd/s/b4ea61fdec462b7030b953f24df21b0bc27c430f57407947073a9186fa89900a" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:13.579299 systemd[1]: Started cri-containerd-4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5.scope - libcontainer container 4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5. 
Mar 12 23:47:13.644282 containerd[2010]: time="2026-03-12T23:47:13.644034915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b47bdf6ff-7g74t,Uid:5d5b1aee-2f97-4fd7-8254-43035d389c8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"5633eb774ea5dcfe6712a79c21e4f8fdf756cf098a4ae89565b22ff5c603fa45\"" Mar 12 23:47:13.654134 containerd[2010]: time="2026-03-12T23:47:13.653267091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 23:47:13.721044 containerd[2010]: time="2026-03-12T23:47:13.719751855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-md969,Uid:29dcec12-5627-4d4d-97ce-2732cae17317,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\"" Mar 12 23:47:14.825380 kubelet[3349]: E0312 23:47:14.825118 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa" Mar 12 23:47:15.003344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488822158.mount: Deactivated successfully. 
Mar 12 23:47:15.757432 containerd[2010]: time="2026-03-12T23:47:15.757380437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:15.760828 containerd[2010]: time="2026-03-12T23:47:15.760783421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 12 23:47:15.761737 containerd[2010]: time="2026-03-12T23:47:15.761702525Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:15.766535 containerd[2010]: time="2026-03-12T23:47:15.766489397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:15.769363 containerd[2010]: time="2026-03-12T23:47:15.769307909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.115984754s" Mar 12 23:47:15.769686 containerd[2010]: time="2026-03-12T23:47:15.769364477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 12 23:47:15.771889 containerd[2010]: time="2026-03-12T23:47:15.771839441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 23:47:15.800058 containerd[2010]: time="2026-03-12T23:47:15.799547045Z" level=info msg="CreateContainer within sandbox \"5633eb774ea5dcfe6712a79c21e4f8fdf756cf098a4ae89565b22ff5c603fa45\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 23:47:15.823166 containerd[2010]: time="2026-03-12T23:47:15.822488550Z" level=info msg="Container 7c00e27bacd06d48edfda83326f8a7c999890e156338b0b166b9baf2d67b1502: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:15.833889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827748772.mount: Deactivated successfully. Mar 12 23:47:15.849187 containerd[2010]: time="2026-03-12T23:47:15.848949042Z" level=info msg="CreateContainer within sandbox \"5633eb774ea5dcfe6712a79c21e4f8fdf756cf098a4ae89565b22ff5c603fa45\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7c00e27bacd06d48edfda83326f8a7c999890e156338b0b166b9baf2d67b1502\"" Mar 12 23:47:15.851919 containerd[2010]: time="2026-03-12T23:47:15.851830782Z" level=info msg="StartContainer for \"7c00e27bacd06d48edfda83326f8a7c999890e156338b0b166b9baf2d67b1502\"" Mar 12 23:47:15.855380 containerd[2010]: time="2026-03-12T23:47:15.855327774Z" level=info msg="connecting to shim 7c00e27bacd06d48edfda83326f8a7c999890e156338b0b166b9baf2d67b1502" address="unix:///run/containerd/s/f180127778deafd76530616518d9dbc4d4030d788086667eab370ef2e180e533" protocol=ttrpc version=3 Mar 12 23:47:15.894385 systemd[1]: Started cri-containerd-7c00e27bacd06d48edfda83326f8a7c999890e156338b0b166b9baf2d67b1502.scope - libcontainer container 7c00e27bacd06d48edfda83326f8a7c999890e156338b0b166b9baf2d67b1502. 
Mar 12 23:47:15.994875 containerd[2010]: time="2026-03-12T23:47:15.994818534Z" level=info msg="StartContainer for \"7c00e27bacd06d48edfda83326f8a7c999890e156338b0b166b9baf2d67b1502\" returns successfully" Mar 12 23:47:16.823928 kubelet[3349]: E0312 23:47:16.823543 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa" Mar 12 23:47:17.033619 kubelet[3349]: I0312 23:47:17.032323 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b47bdf6ff-7g74t" podStartSLOduration=2.911617638 podStartE2EDuration="5.031827016s" podCreationTimestamp="2026-03-12 23:47:12 +0000 UTC" firstStartedPulling="2026-03-12 23:47:13.650256951 +0000 UTC m=+26.149010663" lastFinishedPulling="2026-03-12 23:47:15.770466329 +0000 UTC m=+28.269220041" observedRunningTime="2026-03-12 23:47:17.03134854 +0000 UTC m=+29.530102252" watchObservedRunningTime="2026-03-12 23:47:17.031827016 +0000 UTC m=+29.530580728" Mar 12 23:47:17.089119 kubelet[3349]: E0312 23:47:17.088951 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:47:17.090025 kubelet[3349]: W0312 23:47:17.089356 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:47:17.090025 kubelet[3349]: E0312 23:47:17.089682 3349 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:47:17.238495 containerd[2010]: time="2026-03-12T23:47:17.238419953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:17.240747 containerd[2010]: time="2026-03-12T23:47:17.240377045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 12 23:47:17.242930 containerd[2010]: time="2026-03-12T23:47:17.242872721Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:17.247922 containerd[2010]: time="2026-03-12T23:47:17.247533989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:17.250407 containerd[2010]: time="2026-03-12T23:47:17.250333937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.477933244s" Mar 12 23:47:17.250407 containerd[2010]: time="2026-03-12T23:47:17.250402037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 12 23:47:17.261353 containerd[2010]: time="2026-03-12T23:47:17.261288641Z" level=info msg="CreateContainer within sandbox \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 23:47:17.277507 containerd[2010]: time="2026-03-12T23:47:17.277419125Z" level=info msg="Container 5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:17.298393 containerd[2010]: time="2026-03-12T23:47:17.298219325Z" level=info msg="CreateContainer within sandbox \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22\"" Mar 12 23:47:17.300172 containerd[2010]: time="2026-03-12T23:47:17.300115697Z" level=info msg="StartContainer for \"5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22\"" Mar 12 23:47:17.304147 containerd[2010]: time="2026-03-12T23:47:17.304078925Z" level=info msg="connecting to shim 5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22" address="unix:///run/containerd/s/b4ea61fdec462b7030b953f24df21b0bc27c430f57407947073a9186fa89900a" protocol=ttrpc version=3 Mar 12 23:47:17.346339 systemd[1]: Started cri-containerd-5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22.scope - libcontainer container 5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22. Mar 12 23:47:17.475327 containerd[2010]: time="2026-03-12T23:47:17.475193790Z" level=info msg="StartContainer for \"5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22\" returns successfully" Mar 12 23:47:17.506209 systemd[1]: cri-containerd-5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22.scope: Deactivated successfully. 
Mar 12 23:47:17.515030 containerd[2010]: time="2026-03-12T23:47:17.514798890Z" level=info msg="received container exit event container_id:\"5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22\" id:\"5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22\" pid:4259 exited_at:{seconds:1773359237 nanos:514023726}" Mar 12 23:47:17.562146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ce9dc064493dd20beb4f3e4e8f28fb2f435a86140dbaa407caced24c6a3eb22-rootfs.mount: Deactivated successfully. Mar 12 23:47:18.023679 containerd[2010]: time="2026-03-12T23:47:18.023594200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 23:47:18.824139 kubelet[3349]: E0312 23:47:18.824073 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa" Mar 12 23:47:20.825240 kubelet[3349]: E0312 23:47:20.825136 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa" Mar 12 23:47:22.824343 kubelet[3349]: E0312 23:47:22.824026 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa" Mar 12 23:47:24.498850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481235683.mount: Deactivated successfully. 
Mar 12 23:47:24.576102 containerd[2010]: time="2026-03-12T23:47:24.576035185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:24.578171 containerd[2010]: time="2026-03-12T23:47:24.578111845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 12 23:47:24.580602 containerd[2010]: time="2026-03-12T23:47:24.580507321Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:24.585119 containerd[2010]: time="2026-03-12T23:47:24.585039997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:24.586261 containerd[2010]: time="2026-03-12T23:47:24.586199041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.562546377s" Mar 12 23:47:24.586765 containerd[2010]: time="2026-03-12T23:47:24.586259953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 12 23:47:24.595235 containerd[2010]: time="2026-03-12T23:47:24.595155325Z" level=info msg="CreateContainer within sandbox \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 23:47:24.616429 containerd[2010]: time="2026-03-12T23:47:24.616296409Z" level=info msg="Container 
9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:24.636829 containerd[2010]: time="2026-03-12T23:47:24.636753469Z" level=info msg="CreateContainer within sandbox \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb\"" Mar 12 23:47:24.638025 containerd[2010]: time="2026-03-12T23:47:24.637838089Z" level=info msg="StartContainer for \"9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb\"" Mar 12 23:47:24.641597 containerd[2010]: time="2026-03-12T23:47:24.641535289Z" level=info msg="connecting to shim 9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb" address="unix:///run/containerd/s/b4ea61fdec462b7030b953f24df21b0bc27c430f57407947073a9186fa89900a" protocol=ttrpc version=3 Mar 12 23:47:24.683478 systemd[1]: Started cri-containerd-9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb.scope - libcontainer container 9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb. Mar 12 23:47:24.801450 containerd[2010]: time="2026-03-12T23:47:24.801287822Z" level=info msg="StartContainer for \"9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb\" returns successfully" Mar 12 23:47:24.823781 kubelet[3349]: E0312 23:47:24.823649 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa" Mar 12 23:47:24.990978 systemd[1]: cri-containerd-9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb.scope: Deactivated successfully. 
Mar 12 23:47:24.993600 containerd[2010]: time="2026-03-12T23:47:24.992835795Z" level=info msg="received container exit event container_id:\"9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb\" id:\"9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb\" pid:4315 exited_at:{seconds:1773359244 nanos:992390907}"
Mar 12 23:47:25.497886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9550dbad79698f7554aa25d3f3222f8e016b527c766dec895002371519dad7bb-rootfs.mount: Deactivated successfully.
Mar 12 23:47:26.066811 containerd[2010]: time="2026-03-12T23:47:26.066475536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 12 23:47:26.824061 kubelet[3349]: E0312 23:47:26.823648 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa"
Mar 12 23:47:28.824347 kubelet[3349]: E0312 23:47:28.824283 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa"
Mar 12 23:47:28.977182 containerd[2010]: time="2026-03-12T23:47:28.977107327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:47:28.979012 containerd[2010]: time="2026-03-12T23:47:28.978940399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Mar 12 23:47:28.981015 containerd[2010]: time="2026-03-12T23:47:28.980460547Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:47:28.984052 containerd[2010]: time="2026-03-12T23:47:28.983979043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:47:28.985589 containerd[2010]: time="2026-03-12T23:47:28.985523287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.918988819s"
Mar 12 23:47:28.985589 containerd[2010]: time="2026-03-12T23:47:28.985584067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Mar 12 23:47:28.992963 containerd[2010]: time="2026-03-12T23:47:28.992905759Z" level=info msg="CreateContainer within sandbox \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 12 23:47:29.007041 containerd[2010]: time="2026-03-12T23:47:29.005338107Z" level=info msg="Container 7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:47:29.034623 containerd[2010]: time="2026-03-12T23:47:29.034570731Z" level=info msg="CreateContainer within sandbox \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18\""
Mar 12 23:47:29.036213 containerd[2010]: time="2026-03-12T23:47:29.035965347Z" level=info msg="StartContainer for \"7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18\""
Mar 12 23:47:29.042086 containerd[2010]: time="2026-03-12T23:47:29.041974419Z" level=info msg="connecting to shim 7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18" address="unix:///run/containerd/s/b4ea61fdec462b7030b953f24df21b0bc27c430f57407947073a9186fa89900a" protocol=ttrpc version=3
Mar 12 23:47:29.093302 systemd[1]: Started cri-containerd-7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18.scope - libcontainer container 7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18.
Mar 12 23:47:29.200769 containerd[2010]: time="2026-03-12T23:47:29.200581612Z" level=info msg="StartContainer for \"7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18\" returns successfully"
Mar 12 23:47:30.824119 kubelet[3349]: E0312 23:47:30.823847 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6s59z" podUID="facd10a7-b796-431d-84d9-924988ee39fa"
Mar 12 23:47:30.863104 containerd[2010]: time="2026-03-12T23:47:30.862498520Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 23:47:30.876447 systemd[1]: cri-containerd-7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18.scope: Deactivated successfully.
Mar 12 23:47:30.878215 systemd[1]: cri-containerd-7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18.scope: Consumed 996ms CPU time, 184.5M memory peak, 584K read from disk, 171.3M written to disk.
Mar 12 23:47:30.885071 containerd[2010]: time="2026-03-12T23:47:30.884969312Z" level=info msg="received container exit event container_id:\"7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18\" id:\"7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18\" pid:4374 exited_at:{seconds:1773359250 nanos:884344316}"
Mar 12 23:47:30.936385 kubelet[3349]: I0312 23:47:30.936338 3349 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 12 23:47:30.987657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c6dc238106a740c16acf0591242481ffc30d9fb0a78dc91a2c4274ed934ca18-rootfs.mount: Deactivated successfully.
Mar 12 23:47:31.026958 systemd[1]: Created slice kubepods-burstable-pod6a3f4877_4e7d_4949_9822_2d134b1ffd89.slice - libcontainer container kubepods-burstable-pod6a3f4877_4e7d_4949_9822_2d134b1ffd89.slice.
Mar 12 23:47:31.069550 kubelet[3349]: I0312 23:47:31.069492 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a3f4877-4e7d-4949-9822-2d134b1ffd89-config-volume\") pod \"coredns-66bc5c9577-gjz4s\" (UID: \"6a3f4877-4e7d-4949-9822-2d134b1ffd89\") " pod="kube-system/coredns-66bc5c9577-gjz4s"
Mar 12 23:47:31.070642 kubelet[3349]: I0312 23:47:31.069807 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndz6\" (UniqueName: \"kubernetes.io/projected/6a3f4877-4e7d-4949-9822-2d134b1ffd89-kube-api-access-lndz6\") pod \"coredns-66bc5c9577-gjz4s\" (UID: \"6a3f4877-4e7d-4949-9822-2d134b1ffd89\") " pod="kube-system/coredns-66bc5c9577-gjz4s"
Mar 12 23:47:31.077504 systemd[1]: Created slice kubepods-burstable-podd46ee16e_4443_484c_b953_35f51dbb7a84.slice - libcontainer container kubepods-burstable-podd46ee16e_4443_484c_b953_35f51dbb7a84.slice.
Mar 12 23:47:31.119236 systemd[1]: Created slice kubepods-besteffort-pod2d4c7489_5970_4abd_8680_102ae8ce66cd.slice - libcontainer container kubepods-besteffort-pod2d4c7489_5970_4abd_8680_102ae8ce66cd.slice.
Mar 12 23:47:31.141972 systemd[1]: Created slice kubepods-besteffort-pod64a54bab_a76e_4249_995e_1d55d1566fc4.slice - libcontainer container kubepods-besteffort-pod64a54bab_a76e_4249_995e_1d55d1566fc4.slice.
Mar 12 23:47:31.166591 systemd[1]: Created slice kubepods-besteffort-pod906c1834_12c3_4b7e_ad65_3928196b79d0.slice - libcontainer container kubepods-besteffort-pod906c1834_12c3_4b7e_ad65_3928196b79d0.slice.
Mar 12 23:47:31.172258 kubelet[3349]: I0312 23:47:31.172182 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abc90d13-06a6-4b38-88bd-93ea7ceb4e66-calico-apiserver-certs\") pod \"calico-apiserver-7c756fffd4-j2tp7\" (UID: \"abc90d13-06a6-4b38-88bd-93ea7ceb4e66\") " pod="calico-system/calico-apiserver-7c756fffd4-j2tp7"
Mar 12 23:47:31.172850 kubelet[3349]: I0312 23:47:31.172535 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggcq\" (UniqueName: \"kubernetes.io/projected/abc90d13-06a6-4b38-88bd-93ea7ceb4e66-kube-api-access-2ggcq\") pod \"calico-apiserver-7c756fffd4-j2tp7\" (UID: \"abc90d13-06a6-4b38-88bd-93ea7ceb4e66\") " pod="calico-system/calico-apiserver-7c756fffd4-j2tp7"
Mar 12 23:47:31.172850 kubelet[3349]: I0312 23:47:31.172588 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/906c1834-12c3-4b7e-ad65-3928196b79d0-nginx-config\") pod \"whisker-778c498c7d-9j7qw\" (UID: \"906c1834-12c3-4b7e-ad65-3928196b79d0\") " pod="calico-system/whisker-778c498c7d-9j7qw"
Mar 12 23:47:31.172850 kubelet[3349]: I0312 23:47:31.172628 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64a54bab-a76e-4249-995e-1d55d1566fc4-tigera-ca-bundle\") pod \"calico-kube-controllers-6cfb7fbb5f-qtttw\" (UID: \"64a54bab-a76e-4249-995e-1d55d1566fc4\") " pod="calico-system/calico-kube-controllers-6cfb7fbb5f-qtttw"
Mar 12 23:47:31.172850 kubelet[3349]: I0312 23:47:31.172663 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d3835311-ddc2-442e-a8c0-72f59b9ff3ae-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-xmwzp\" (UID: \"d3835311-ddc2-442e-a8c0-72f59b9ff3ae\") " pod="calico-system/goldmane-cccfbd5cf-xmwzp"
Mar 12 23:47:31.172850 kubelet[3349]: I0312 23:47:31.172714 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2d4c7489-5970-4abd-8680-102ae8ce66cd-calico-apiserver-certs\") pod \"calico-apiserver-7c756fffd4-8qfx6\" (UID: \"2d4c7489-5970-4abd-8680-102ae8ce66cd\") " pod="calico-system/calico-apiserver-7c756fffd4-8qfx6"
Mar 12 23:47:31.173174 kubelet[3349]: I0312 23:47:31.172752 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3835311-ddc2-442e-a8c0-72f59b9ff3ae-config\") pod \"goldmane-cccfbd5cf-xmwzp\" (UID: \"d3835311-ddc2-442e-a8c0-72f59b9ff3ae\") " pod="calico-system/goldmane-cccfbd5cf-xmwzp"
Mar 12 23:47:31.173174 kubelet[3349]: I0312 23:47:31.172789 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjznb\" (UniqueName: \"kubernetes.io/projected/d3835311-ddc2-442e-a8c0-72f59b9ff3ae-kube-api-access-hjznb\") pod \"goldmane-cccfbd5cf-xmwzp\" (UID: \"d3835311-ddc2-442e-a8c0-72f59b9ff3ae\") " pod="calico-system/goldmane-cccfbd5cf-xmwzp"
Mar 12 23:47:31.173599 kubelet[3349]: I0312 23:47:31.173474 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78967\" (UniqueName: \"kubernetes.io/projected/906c1834-12c3-4b7e-ad65-3928196b79d0-kube-api-access-78967\") pod \"whisker-778c498c7d-9j7qw\" (UID: \"906c1834-12c3-4b7e-ad65-3928196b79d0\") " pod="calico-system/whisker-778c498c7d-9j7qw"
Mar 12 23:47:31.173829 kubelet[3349]: I0312 23:47:31.173561 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3835311-ddc2-442e-a8c0-72f59b9ff3ae-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-xmwzp\" (UID: \"d3835311-ddc2-442e-a8c0-72f59b9ff3ae\") " pod="calico-system/goldmane-cccfbd5cf-xmwzp"
Mar 12 23:47:31.174343 kubelet[3349]: I0312 23:47:31.174223 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/906c1834-12c3-4b7e-ad65-3928196b79d0-whisker-backend-key-pair\") pod \"whisker-778c498c7d-9j7qw\" (UID: \"906c1834-12c3-4b7e-ad65-3928196b79d0\") " pod="calico-system/whisker-778c498c7d-9j7qw"
Mar 12 23:47:31.178538 kubelet[3349]: I0312 23:47:31.178287 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d46ee16e-4443-484c-b953-35f51dbb7a84-config-volume\") pod \"coredns-66bc5c9577-kvrxm\" (UID: \"d46ee16e-4443-484c-b953-35f51dbb7a84\") " pod="kube-system/coredns-66bc5c9577-kvrxm"
Mar 12 23:47:31.179815 kubelet[3349]: I0312 23:47:31.179733 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/906c1834-12c3-4b7e-ad65-3928196b79d0-whisker-ca-bundle\") pod \"whisker-778c498c7d-9j7qw\" (UID: \"906c1834-12c3-4b7e-ad65-3928196b79d0\") " pod="calico-system/whisker-778c498c7d-9j7qw"
Mar 12 23:47:31.182027 kubelet[3349]: I0312 23:47:31.181918 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rk4\" (UniqueName: \"kubernetes.io/projected/2d4c7489-5970-4abd-8680-102ae8ce66cd-kube-api-access-f8rk4\") pod \"calico-apiserver-7c756fffd4-8qfx6\" (UID: \"2d4c7489-5970-4abd-8680-102ae8ce66cd\") " pod="calico-system/calico-apiserver-7c756fffd4-8qfx6"
Mar 12 23:47:31.182700 kubelet[3349]: I0312 23:47:31.182627 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hhd\" (UniqueName: \"kubernetes.io/projected/d46ee16e-4443-484c-b953-35f51dbb7a84-kube-api-access-46hhd\") pod \"coredns-66bc5c9577-kvrxm\" (UID: \"d46ee16e-4443-484c-b953-35f51dbb7a84\") " pod="kube-system/coredns-66bc5c9577-kvrxm"
Mar 12 23:47:31.183212 kubelet[3349]: I0312 23:47:31.182868 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptft\" (UniqueName: \"kubernetes.io/projected/64a54bab-a76e-4249-995e-1d55d1566fc4-kube-api-access-hptft\") pod \"calico-kube-controllers-6cfb7fbb5f-qtttw\" (UID: \"64a54bab-a76e-4249-995e-1d55d1566fc4\") " pod="calico-system/calico-kube-controllers-6cfb7fbb5f-qtttw"
Mar 12 23:47:31.202982 systemd[1]: Created slice kubepods-besteffort-podabc90d13_06a6_4b38_88bd_93ea7ceb4e66.slice - libcontainer container kubepods-besteffort-podabc90d13_06a6_4b38_88bd_93ea7ceb4e66.slice.
Mar 12 23:47:31.228297 systemd[1]: Created slice kubepods-besteffort-podd3835311_ddc2_442e_a8c0_72f59b9ff3ae.slice - libcontainer container kubepods-besteffort-podd3835311_ddc2_442e_a8c0_72f59b9ff3ae.slice.
Mar 12 23:47:31.280364 containerd[2010]: time="2026-03-12T23:47:31.280306962Z" level=info msg="CreateContainer within sandbox \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 12 23:47:31.331533 containerd[2010]: time="2026-03-12T23:47:31.331402267Z" level=info msg="Container 918a033cf44b95e596a77fd9521fcee79ae591fde40bcb7adc246e9cfe1b7598: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:47:31.378776 containerd[2010]: time="2026-03-12T23:47:31.378701515Z" level=info msg="CreateContainer within sandbox \"4ec90756090568dc0819a9a81bff1cb9e5d354f0ff792f00f01176a650599ab5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"918a033cf44b95e596a77fd9521fcee79ae591fde40bcb7adc246e9cfe1b7598\""
Mar 12 23:47:31.404239 containerd[2010]: time="2026-03-12T23:47:31.403437343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gjz4s,Uid:6a3f4877-4e7d-4949-9822-2d134b1ffd89,Namespace:kube-system,Attempt:0,}"
Mar 12 23:47:31.417140 containerd[2010]: time="2026-03-12T23:47:31.416782591Z" level=info msg="StartContainer for \"918a033cf44b95e596a77fd9521fcee79ae591fde40bcb7adc246e9cfe1b7598\""
Mar 12 23:47:31.432019 containerd[2010]: time="2026-03-12T23:47:31.430871707Z" level=info msg="connecting to shim 918a033cf44b95e596a77fd9521fcee79ae591fde40bcb7adc246e9cfe1b7598" address="unix:///run/containerd/s/b4ea61fdec462b7030b953f24df21b0bc27c430f57407947073a9186fa89900a" protocol=ttrpc version=3
Mar 12 23:47:31.442344 containerd[2010]: time="2026-03-12T23:47:31.442295215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvrxm,Uid:d46ee16e-4443-484c-b953-35f51dbb7a84,Namespace:kube-system,Attempt:0,}"
Mar 12 23:47:31.468224 containerd[2010]: time="2026-03-12T23:47:31.468125467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfb7fbb5f-qtttw,Uid:64a54bab-a76e-4249-995e-1d55d1566fc4,Namespace:calico-system,Attempt:0,}"
Mar 12 23:47:31.493653 containerd[2010]: time="2026-03-12T23:47:31.493582543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778c498c7d-9j7qw,Uid:906c1834-12c3-4b7e-ad65-3928196b79d0,Namespace:calico-system,Attempt:0,}"
Mar 12 23:47:31.512372 systemd[1]: Started cri-containerd-918a033cf44b95e596a77fd9521fcee79ae591fde40bcb7adc246e9cfe1b7598.scope - libcontainer container 918a033cf44b95e596a77fd9521fcee79ae591fde40bcb7adc246e9cfe1b7598.
Mar 12 23:47:31.528589 containerd[2010]: time="2026-03-12T23:47:31.528317492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c756fffd4-j2tp7,Uid:abc90d13-06a6-4b38-88bd-93ea7ceb4e66,Namespace:calico-system,Attempt:0,}"
Mar 12 23:47:31.575224 containerd[2010]: time="2026-03-12T23:47:31.575159888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-xmwzp,Uid:d3835311-ddc2-442e-a8c0-72f59b9ff3ae,Namespace:calico-system,Attempt:0,}"
Mar 12 23:47:31.737199 containerd[2010]: time="2026-03-12T23:47:31.735695757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c756fffd4-8qfx6,Uid:2d4c7489-5970-4abd-8680-102ae8ce66cd,Namespace:calico-system,Attempt:0,}"
Mar 12 23:47:31.964437 containerd[2010]: time="2026-03-12T23:47:31.964372270Z" level=info msg="StartContainer for \"918a033cf44b95e596a77fd9521fcee79ae591fde40bcb7adc246e9cfe1b7598\" returns successfully"
Mar 12 23:47:32.061968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661051064.mount: Deactivated successfully.
Mar 12 23:47:32.119122 containerd[2010]: time="2026-03-12T23:47:32.118984903Z" level=error msg="Failed to destroy network for sandbox \"a2a4765c30d1cd46100b7c1fb229dad249e25c10424c89fc4833a4faed649a1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.127468 containerd[2010]: time="2026-03-12T23:47:32.122409211Z" level=error msg="Failed to destroy network for sandbox \"d0e6857d216dcb54d4ecd7cfe0ba032e71df22876ed2d5dbc2f215b6cf228be2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.127784 containerd[2010]: time="2026-03-12T23:47:32.125533783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvrxm,Uid:d46ee16e-4443-484c-b953-35f51dbb7a84,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2a4765c30d1cd46100b7c1fb229dad249e25c10424c89fc4833a4faed649a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.128638 kubelet[3349]: E0312 23:47:32.128081 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2a4765c30d1cd46100b7c1fb229dad249e25c10424c89fc4833a4faed649a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.128638 kubelet[3349]: E0312 23:47:32.128166 3349 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2a4765c30d1cd46100b7c1fb229dad249e25c10424c89fc4833a4faed649a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kvrxm"
Mar 12 23:47:32.128638 kubelet[3349]: E0312 23:47:32.128201 3349 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2a4765c30d1cd46100b7c1fb229dad249e25c10424c89fc4833a4faed649a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kvrxm"
Mar 12 23:47:32.132669 kubelet[3349]: E0312 23:47:32.128284 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kvrxm_kube-system(d46ee16e-4443-484c-b953-35f51dbb7a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kvrxm_kube-system(d46ee16e-4443-484c-b953-35f51dbb7a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2a4765c30d1cd46100b7c1fb229dad249e25c10424c89fc4833a4faed649a1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kvrxm" podUID="d46ee16e-4443-484c-b953-35f51dbb7a84"
Mar 12 23:47:32.131051 systemd[1]: run-netns-cni\x2dc1551d6c\x2d0b77\x2db55f\x2d6965\x2d85506304f566.mount: Deactivated successfully.
Mar 12 23:47:32.148599 containerd[2010]: time="2026-03-12T23:47:32.143489827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c756fffd4-j2tp7,Uid:abc90d13-06a6-4b38-88bd-93ea7ceb4e66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e6857d216dcb54d4ecd7cfe0ba032e71df22876ed2d5dbc2f215b6cf228be2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.149144 kubelet[3349]: E0312 23:47:32.149089 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e6857d216dcb54d4ecd7cfe0ba032e71df22876ed2d5dbc2f215b6cf228be2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.150918 systemd[1]: run-netns-cni\x2d1ef225b0\x2d52c2\x2dad56\x2d0c66\x2d0c9ad2d97ccf.mount: Deactivated successfully.
Mar 12 23:47:32.152812 kubelet[3349]: E0312 23:47:32.152412 3349 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e6857d216dcb54d4ecd7cfe0ba032e71df22876ed2d5dbc2f215b6cf228be2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c756fffd4-j2tp7"
Mar 12 23:47:32.152812 kubelet[3349]: E0312 23:47:32.152461 3349 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e6857d216dcb54d4ecd7cfe0ba032e71df22876ed2d5dbc2f215b6cf228be2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c756fffd4-j2tp7"
Mar 12 23:47:32.152812 kubelet[3349]: E0312 23:47:32.152566 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c756fffd4-j2tp7_calico-system(abc90d13-06a6-4b38-88bd-93ea7ceb4e66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c756fffd4-j2tp7_calico-system(abc90d13-06a6-4b38-88bd-93ea7ceb4e66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0e6857d216dcb54d4ecd7cfe0ba032e71df22876ed2d5dbc2f215b6cf228be2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c756fffd4-j2tp7" podUID="abc90d13-06a6-4b38-88bd-93ea7ceb4e66"
Mar 12 23:47:32.184074 containerd[2010]: time="2026-03-12T23:47:32.183942571Z" level=error msg="Failed to destroy network for sandbox \"b53bd1ffe11e2c551f679b9f56c92fac68b0b6b9736e0d03a6d84db5b2a72567\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.193246 containerd[2010]: time="2026-03-12T23:47:32.193108387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gjz4s,Uid:6a3f4877-4e7d-4949-9822-2d134b1ffd89,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53bd1ffe11e2c551f679b9f56c92fac68b0b6b9736e0d03a6d84db5b2a72567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.194011 kubelet[3349]: E0312 23:47:32.193866 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53bd1ffe11e2c551f679b9f56c92fac68b0b6b9736e0d03a6d84db5b2a72567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.194011 kubelet[3349]: E0312 23:47:32.193939 3349 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53bd1ffe11e2c551f679b9f56c92fac68b0b6b9736e0d03a6d84db5b2a72567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gjz4s"
Mar 12 23:47:32.194636 kubelet[3349]: E0312 23:47:32.193982 3349 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53bd1ffe11e2c551f679b9f56c92fac68b0b6b9736e0d03a6d84db5b2a72567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gjz4s"
Mar 12 23:47:32.194636 kubelet[3349]: E0312 23:47:32.194291 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gjz4s_kube-system(6a3f4877-4e7d-4949-9822-2d134b1ffd89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gjz4s_kube-system(6a3f4877-4e7d-4949-9822-2d134b1ffd89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b53bd1ffe11e2c551f679b9f56c92fac68b0b6b9736e0d03a6d84db5b2a72567\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gjz4s" podUID="6a3f4877-4e7d-4949-9822-2d134b1ffd89"
Mar 12 23:47:32.194519 systemd[1]: run-netns-cni\x2df43b5687\x2d1aee\x2dc461\x2d8674\x2d74a1695e1114.mount: Deactivated successfully.
Mar 12 23:47:32.202308 containerd[2010]: time="2026-03-12T23:47:32.201979051Z" level=error msg="Failed to destroy network for sandbox \"f5926fe61f7b8164486259c072dc41b6cbfa7d8e7b4e4bed71c9b89031d24416\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.211040 containerd[2010]: time="2026-03-12T23:47:32.210237787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfb7fbb5f-qtttw,Uid:64a54bab-a76e-4249-995e-1d55d1566fc4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5926fe61f7b8164486259c072dc41b6cbfa7d8e7b4e4bed71c9b89031d24416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.211319 systemd[1]: run-netns-cni\x2dd446b819\x2d77ac\x2dbef8\x2dbdf9\x2dd251be8f366b.mount: Deactivated successfully.
Mar 12 23:47:32.213824 kubelet[3349]: E0312 23:47:32.213770 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5926fe61f7b8164486259c072dc41b6cbfa7d8e7b4e4bed71c9b89031d24416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.214474 kubelet[3349]: E0312 23:47:32.214133 3349 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5926fe61f7b8164486259c072dc41b6cbfa7d8e7b4e4bed71c9b89031d24416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cfb7fbb5f-qtttw"
Mar 12 23:47:32.214474 kubelet[3349]: E0312 23:47:32.214177 3349 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5926fe61f7b8164486259c072dc41b6cbfa7d8e7b4e4bed71c9b89031d24416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cfb7fbb5f-qtttw"
Mar 12 23:47:32.217247 kubelet[3349]: E0312 23:47:32.215048 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cfb7fbb5f-qtttw_calico-system(64a54bab-a76e-4249-995e-1d55d1566fc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cfb7fbb5f-qtttw_calico-system(64a54bab-a76e-4249-995e-1d55d1566fc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5926fe61f7b8164486259c072dc41b6cbfa7d8e7b4e4bed71c9b89031d24416\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cfb7fbb5f-qtttw" podUID="64a54bab-a76e-4249-995e-1d55d1566fc4"
Mar 12 23:47:32.221978 containerd[2010]: time="2026-03-12T23:47:32.221868139Z" level=error msg="Failed to destroy network for sandbox \"554e2f2f7e9bdde12c9fcf08dcac3657a06acf0febe0205fb1b5ecb3ad72f09d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.234273 containerd[2010]: time="2026-03-12T23:47:32.234193411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-xmwzp,Uid:d3835311-ddc2-442e-a8c0-72f59b9ff3ae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"554e2f2f7e9bdde12c9fcf08dcac3657a06acf0febe0205fb1b5ecb3ad72f09d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.235573 kubelet[3349]: E0312 23:47:32.234604 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"554e2f2f7e9bdde12c9fcf08dcac3657a06acf0febe0205fb1b5ecb3ad72f09d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:32.235573 kubelet[3349]: E0312 23:47:32.234671 3349 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"554e2f2f7e9bdde12c9fcf08dcac3657a06acf0febe0205fb1b5ecb3ad72f09d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-xmwzp"
Mar 12 23:47:32.235573 kubelet[3349]: E0312 23:47:32.234704 3349 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"554e2f2f7e9bdde12c9fcf08dcac3657a06acf0febe0205fb1b5ecb3ad72f09d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-xmwzp"
Mar 12 23:47:32.235804 kubelet[3349]: E0312 23:47:32.234783 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-xmwzp_calico-system(d3835311-ddc2-442e-a8c0-72f59b9ff3ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-xmwzp_calico-system(d3835311-ddc2-442e-a8c0-72f59b9ff3ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"554e2f2f7e9bdde12c9fcf08dcac3657a06acf0febe0205fb1b5ecb3ad72f09d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-xmwzp" podUID="d3835311-ddc2-442e-a8c0-72f59b9ff3ae"
Mar 12 23:47:32.237786 containerd[2010]: time="2026-03-12T23:47:32.237524275Z" level=error msg="Failed to destroy network for sandbox \"189847d983cfb9116dfbf9ac6866bf5bff6ce571ef222c93b146a6ed02e8a03b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12
23:47:32.244084 containerd[2010]: time="2026-03-12T23:47:32.242174875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778c498c7d-9j7qw,Uid:906c1834-12c3-4b7e-ad65-3928196b79d0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"189847d983cfb9116dfbf9ac6866bf5bff6ce571ef222c93b146a6ed02e8a03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 23:47:32.244380 kubelet[3349]: E0312 23:47:32.243204 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189847d983cfb9116dfbf9ac6866bf5bff6ce571ef222c93b146a6ed02e8a03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 23:47:32.244380 kubelet[3349]: E0312 23:47:32.243270 3349 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189847d983cfb9116dfbf9ac6866bf5bff6ce571ef222c93b146a6ed02e8a03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778c498c7d-9j7qw" Mar 12 23:47:32.244380 kubelet[3349]: E0312 23:47:32.243310 3349 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189847d983cfb9116dfbf9ac6866bf5bff6ce571ef222c93b146a6ed02e8a03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778c498c7d-9j7qw" 
Mar 12 23:47:32.244670 kubelet[3349]: E0312 23:47:32.243389 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-778c498c7d-9j7qw_calico-system(906c1834-12c3-4b7e-ad65-3928196b79d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-778c498c7d-9j7qw_calico-system(906c1834-12c3-4b7e-ad65-3928196b79d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"189847d983cfb9116dfbf9ac6866bf5bff6ce571ef222c93b146a6ed02e8a03b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-778c498c7d-9j7qw" podUID="906c1834-12c3-4b7e-ad65-3928196b79d0" Mar 12 23:47:32.296980 kubelet[3349]: I0312 23:47:32.296794 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-md969" podStartSLOduration=4.032481951 podStartE2EDuration="19.296767651s" podCreationTimestamp="2026-03-12 23:47:13 +0000 UTC" firstStartedPulling="2026-03-12 23:47:13.723246687 +0000 UTC m=+26.222000399" lastFinishedPulling="2026-03-12 23:47:28.987532399 +0000 UTC m=+41.486286099" observedRunningTime="2026-03-12 23:47:32.293846527 +0000 UTC m=+44.792600251" watchObservedRunningTime="2026-03-12 23:47:32.296767651 +0000 UTC m=+44.795521363" Mar 12 23:47:32.316680 containerd[2010]: time="2026-03-12T23:47:32.314741383Z" level=error msg="Failed to destroy network for sandbox \"728246a0792a94900b0023879134d036d82b5970bb0b28fa35688324c92fef27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 23:47:32.322160 containerd[2010]: time="2026-03-12T23:47:32.322061696Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c756fffd4-8qfx6,Uid:2d4c7489-5970-4abd-8680-102ae8ce66cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"728246a0792a94900b0023879134d036d82b5970bb0b28fa35688324c92fef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 23:47:32.323633 kubelet[3349]: E0312 23:47:32.323192 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728246a0792a94900b0023879134d036d82b5970bb0b28fa35688324c92fef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 23:47:32.323633 kubelet[3349]: E0312 23:47:32.323302 3349 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728246a0792a94900b0023879134d036d82b5970bb0b28fa35688324c92fef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c756fffd4-8qfx6" Mar 12 23:47:32.323633 kubelet[3349]: E0312 23:47:32.323365 3349 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728246a0792a94900b0023879134d036d82b5970bb0b28fa35688324c92fef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c756fffd4-8qfx6" Mar 12 23:47:32.324167 kubelet[3349]: E0312 23:47:32.323632 3349 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c756fffd4-8qfx6_calico-system(2d4c7489-5970-4abd-8680-102ae8ce66cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c756fffd4-8qfx6_calico-system(2d4c7489-5970-4abd-8680-102ae8ce66cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"728246a0792a94900b0023879134d036d82b5970bb0b28fa35688324c92fef27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c756fffd4-8qfx6" podUID="2d4c7489-5970-4abd-8680-102ae8ce66cd" Mar 12 23:47:32.840149 systemd[1]: Created slice kubepods-besteffort-podfacd10a7_b796_431d_84d9_924988ee39fa.slice - libcontainer container kubepods-besteffort-podfacd10a7_b796_431d_84d9_924988ee39fa.slice. Mar 12 23:47:32.851960 containerd[2010]: time="2026-03-12T23:47:32.851896738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6s59z,Uid:facd10a7-b796-431d-84d9-924988ee39fa,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:32.982099 systemd[1]: run-netns-cni\x2de007bd9e\x2d3739\x2db1ed\x2d3829\x2d047843d40ea2.mount: Deactivated successfully. Mar 12 23:47:32.982285 systemd[1]: run-netns-cni\x2d3687e933\x2de68d\x2d4d29\x2d3415\x2d7660f408657d.mount: Deactivated successfully. Mar 12 23:47:32.982406 systemd[1]: run-netns-cni\x2daa03a5b5\x2d714e\x2d9149\x2d75e5\x2dc1b5dc1c7cb2.mount: Deactivated successfully. Mar 12 23:47:33.127974 systemd-networkd[1821]: cali3b4486d0807: Link UP Mar 12 23:47:33.129366 systemd-networkd[1821]: cali3b4486d0807: Gained carrier Mar 12 23:47:33.142821 (udev-worker)[4697]: Network interface NamePolicy= disabled on kernel command line. 
Mar 12 23:47:33.165544 containerd[2010]: 2026-03-12 23:47:32.902 [ERROR][4674] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 23:47:33.165544 containerd[2010]: 2026-03-12 23:47:32.942 [INFO][4674] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0 csi-node-driver- calico-system facd10a7-b796-431d-84d9-924988ee39fa 732 0 2026-03-12 23:47:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-143 csi-node-driver-6s59z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3b4486d0807 [] [] }} ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Namespace="calico-system" Pod="csi-node-driver-6s59z" WorkloadEndpoint="ip--172--31--24--143-k8s-csi--node--driver--6s59z-" Mar 12 23:47:33.165544 containerd[2010]: 2026-03-12 23:47:32.942 [INFO][4674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Namespace="calico-system" Pod="csi-node-driver-6s59z" WorkloadEndpoint="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" Mar 12 23:47:33.165544 containerd[2010]: 2026-03-12 23:47:33.038 [INFO][4688] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" HandleID="k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Workload="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" Mar 12 23:47:33.166715 
containerd[2010]: 2026-03-12 23:47:33.054 [INFO][4688] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" HandleID="k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Workload="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001037f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-143", "pod":"csi-node-driver-6s59z", "timestamp":"2026-03-12 23:47:33.038450263 +0000 UTC"}, Hostname:"ip-172-31-24-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003726e0)} Mar 12 23:47:33.166715 containerd[2010]: 2026-03-12 23:47:33.054 [INFO][4688] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:33.166715 containerd[2010]: 2026-03-12 23:47:33.054 [INFO][4688] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 23:47:33.166715 containerd[2010]: 2026-03-12 23:47:33.055 [INFO][4688] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-143' Mar 12 23:47:33.166715 containerd[2010]: 2026-03-12 23:47:33.059 [INFO][4688] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" host="ip-172-31-24-143" Mar 12 23:47:33.166715 containerd[2010]: 2026-03-12 23:47:33.067 [INFO][4688] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-143" Mar 12 23:47:33.166715 containerd[2010]: 2026-03-12 23:47:33.075 [INFO][4688] ipam/ipam.go 526: Trying affinity for 192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:33.166715 containerd[2010]: 2026-03-12 23:47:33.078 [INFO][4688] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:33.166715 containerd[2010]: 2026-03-12 23:47:33.082 [INFO][4688] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:33.167344 containerd[2010]: 2026-03-12 23:47:33.083 [INFO][4688] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" host="ip-172-31-24-143" Mar 12 23:47:33.167344 containerd[2010]: 2026-03-12 23:47:33.085 [INFO][4688] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47 Mar 12 23:47:33.167344 containerd[2010]: 2026-03-12 23:47:33.095 [INFO][4688] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" host="ip-172-31-24-143" Mar 12 23:47:33.167344 containerd[2010]: 2026-03-12 23:47:33.105 [INFO][4688] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.65/26] block=192.168.37.64/26 
handle="k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" host="ip-172-31-24-143" Mar 12 23:47:33.167344 containerd[2010]: 2026-03-12 23:47:33.105 [INFO][4688] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.65/26] handle="k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" host="ip-172-31-24-143" Mar 12 23:47:33.167344 containerd[2010]: 2026-03-12 23:47:33.105 [INFO][4688] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:33.167344 containerd[2010]: 2026-03-12 23:47:33.105 [INFO][4688] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.65/26] IPv6=[] ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" HandleID="k8s-pod-network.b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Workload="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" Mar 12 23:47:33.167804 containerd[2010]: 2026-03-12 23:47:33.114 [INFO][4674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Namespace="calico-system" Pod="csi-node-driver-6s59z" WorkloadEndpoint="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"facd10a7-b796-431d-84d9-924988ee39fa", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"", Pod:"csi-node-driver-6s59z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b4486d0807", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:33.168432 containerd[2010]: 2026-03-12 23:47:33.114 [INFO][4674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.65/32] ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Namespace="calico-system" Pod="csi-node-driver-6s59z" WorkloadEndpoint="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" Mar 12 23:47:33.168432 containerd[2010]: 2026-03-12 23:47:33.114 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b4486d0807 ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Namespace="calico-system" Pod="csi-node-driver-6s59z" WorkloadEndpoint="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" Mar 12 23:47:33.168432 containerd[2010]: 2026-03-12 23:47:33.129 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Namespace="calico-system" Pod="csi-node-driver-6s59z" WorkloadEndpoint="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" Mar 12 23:47:33.168744 containerd[2010]: 2026-03-12 23:47:33.130 [INFO][4674] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Namespace="calico-system" Pod="csi-node-driver-6s59z" WorkloadEndpoint="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"facd10a7-b796-431d-84d9-924988ee39fa", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47", Pod:"csi-node-driver-6s59z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b4486d0807", MAC:"a2:a7:c9:bf:98:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:33.168953 containerd[2010]: 2026-03-12 23:47:33.155 [INFO][4674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" Namespace="calico-system" Pod="csi-node-driver-6s59z" WorkloadEndpoint="ip--172--31--24--143-k8s-csi--node--driver--6s59z-eth0" Mar 12 23:47:33.273381 containerd[2010]: time="2026-03-12T23:47:33.273126848Z" level=info msg="connecting to shim b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47" address="unix:///run/containerd/s/4d214a1ac80d7883309853ce57fdfadebb6948cc7fce86f88bb5dd08580a303b" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:33.352308 systemd[1]: Started cri-containerd-b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47.scope - libcontainer container b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47. Mar 12 23:47:33.416221 kubelet[3349]: I0312 23:47:33.415607 3349 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/906c1834-12c3-4b7e-ad65-3928196b79d0-whisker-ca-bundle\") pod \"906c1834-12c3-4b7e-ad65-3928196b79d0\" (UID: \"906c1834-12c3-4b7e-ad65-3928196b79d0\") " Mar 12 23:47:33.416221 kubelet[3349]: I0312 23:47:33.415676 3349 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/906c1834-12c3-4b7e-ad65-3928196b79d0-whisker-backend-key-pair\") pod \"906c1834-12c3-4b7e-ad65-3928196b79d0\" (UID: \"906c1834-12c3-4b7e-ad65-3928196b79d0\") " Mar 12 23:47:33.416221 kubelet[3349]: I0312 23:47:33.415715 3349 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/906c1834-12c3-4b7e-ad65-3928196b79d0-nginx-config\") pod \"906c1834-12c3-4b7e-ad65-3928196b79d0\" (UID: \"906c1834-12c3-4b7e-ad65-3928196b79d0\") " Mar 12 23:47:33.416221 kubelet[3349]: I0312 23:47:33.415760 3349 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78967\" 
(UniqueName: \"kubernetes.io/projected/906c1834-12c3-4b7e-ad65-3928196b79d0-kube-api-access-78967\") pod \"906c1834-12c3-4b7e-ad65-3928196b79d0\" (UID: \"906c1834-12c3-4b7e-ad65-3928196b79d0\") " Mar 12 23:47:33.428056 kubelet[3349]: I0312 23:47:33.427560 3349 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906c1834-12c3-4b7e-ad65-3928196b79d0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "906c1834-12c3-4b7e-ad65-3928196b79d0" (UID: "906c1834-12c3-4b7e-ad65-3928196b79d0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 23:47:33.430647 kubelet[3349]: I0312 23:47:33.430591 3349 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906c1834-12c3-4b7e-ad65-3928196b79d0-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "906c1834-12c3-4b7e-ad65-3928196b79d0" (UID: "906c1834-12c3-4b7e-ad65-3928196b79d0"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 23:47:33.437841 systemd[1]: var-lib-kubelet-pods-906c1834\x2d12c3\x2d4b7e\x2dad65\x2d3928196b79d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d78967.mount: Deactivated successfully. Mar 12 23:47:33.448015 kubelet[3349]: I0312 23:47:33.447675 3349 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906c1834-12c3-4b7e-ad65-3928196b79d0-kube-api-access-78967" (OuterVolumeSpecName: "kube-api-access-78967") pod "906c1834-12c3-4b7e-ad65-3928196b79d0" (UID: "906c1834-12c3-4b7e-ad65-3928196b79d0"). InnerVolumeSpecName "kube-api-access-78967". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 23:47:33.449326 kubelet[3349]: I0312 23:47:33.449270 3349 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906c1834-12c3-4b7e-ad65-3928196b79d0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "906c1834-12c3-4b7e-ad65-3928196b79d0" (UID: "906c1834-12c3-4b7e-ad65-3928196b79d0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 23:47:33.449511 systemd[1]: var-lib-kubelet-pods-906c1834\x2d12c3\x2d4b7e\x2dad65\x2d3928196b79d0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 12 23:47:33.459638 containerd[2010]: time="2026-03-12T23:47:33.459519789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6s59z,Uid:facd10a7-b796-431d-84d9-924988ee39fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47\"" Mar 12 23:47:33.462811 containerd[2010]: time="2026-03-12T23:47:33.462472581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 23:47:33.516701 kubelet[3349]: I0312 23:47:33.516561 3349 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/906c1834-12c3-4b7e-ad65-3928196b79d0-whisker-ca-bundle\") on node \"ip-172-31-24-143\" DevicePath \"\"" Mar 12 23:47:33.517137 kubelet[3349]: I0312 23:47:33.516983 3349 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/906c1834-12c3-4b7e-ad65-3928196b79d0-whisker-backend-key-pair\") on node \"ip-172-31-24-143\" DevicePath \"\"" Mar 12 23:47:33.517137 kubelet[3349]: I0312 23:47:33.517067 3349 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/906c1834-12c3-4b7e-ad65-3928196b79d0-nginx-config\") on node \"ip-172-31-24-143\" DevicePath \"\"" Mar 12 23:47:33.517137 kubelet[3349]: I0312 23:47:33.517096 3349 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-78967\" (UniqueName: \"kubernetes.io/projected/906c1834-12c3-4b7e-ad65-3928196b79d0-kube-api-access-78967\") on node \"ip-172-31-24-143\" DevicePath \"\"" Mar 12 23:47:33.838092 systemd[1]: Removed slice kubepods-besteffort-pod906c1834_12c3_4b7e_ad65_3928196b79d0.slice - libcontainer container kubepods-besteffort-pod906c1834_12c3_4b7e_ad65_3928196b79d0.slice. Mar 12 23:47:34.400880 systemd[1]: Created slice kubepods-besteffort-podda2822f6_b520_4fa6_b494_5158b4a0788c.slice - libcontainer container kubepods-besteffort-podda2822f6_b520_4fa6_b494_5158b4a0788c.slice. Mar 12 23:47:34.423668 kubelet[3349]: I0312 23:47:34.423572 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ldzl\" (UniqueName: \"kubernetes.io/projected/da2822f6-b520-4fa6-b494-5158b4a0788c-kube-api-access-5ldzl\") pod \"whisker-665b8996c-9lnvz\" (UID: \"da2822f6-b520-4fa6-b494-5158b4a0788c\") " pod="calico-system/whisker-665b8996c-9lnvz" Mar 12 23:47:34.424622 kubelet[3349]: I0312 23:47:34.423747 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da2822f6-b520-4fa6-b494-5158b4a0788c-whisker-backend-key-pair\") pod \"whisker-665b8996c-9lnvz\" (UID: \"da2822f6-b520-4fa6-b494-5158b4a0788c\") " pod="calico-system/whisker-665b8996c-9lnvz" Mar 12 23:47:34.424622 kubelet[3349]: I0312 23:47:34.423813 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/da2822f6-b520-4fa6-b494-5158b4a0788c-nginx-config\") pod \"whisker-665b8996c-9lnvz\" (UID: 
\"da2822f6-b520-4fa6-b494-5158b4a0788c\") " pod="calico-system/whisker-665b8996c-9lnvz" Mar 12 23:47:34.424622 kubelet[3349]: I0312 23:47:34.423852 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da2822f6-b520-4fa6-b494-5158b4a0788c-whisker-ca-bundle\") pod \"whisker-665b8996c-9lnvz\" (UID: \"da2822f6-b520-4fa6-b494-5158b4a0788c\") " pod="calico-system/whisker-665b8996c-9lnvz" Mar 12 23:47:34.517252 systemd-networkd[1821]: cali3b4486d0807: Gained IPv6LL Mar 12 23:47:34.720920 containerd[2010]: time="2026-03-12T23:47:34.720605123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-665b8996c-9lnvz,Uid:da2822f6-b520-4fa6-b494-5158b4a0788c,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:35.046045 systemd-networkd[1821]: cali075d1f80b1d: Link UP Mar 12 23:47:35.048723 systemd-networkd[1821]: cali075d1f80b1d: Gained carrier Mar 12 23:47:35.050280 (udev-worker)[4696]: Network interface NamePolicy= disabled on kernel command line. 
Mar 12 23:47:35.091064 containerd[2010]: 2026-03-12 23:47:34.851 [INFO][4873] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0 whisker-665b8996c- calico-system da2822f6-b520-4fa6-b494-5158b4a0788c 937 0 2026-03-12 23:47:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:665b8996c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-143 whisker-665b8996c-9lnvz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali075d1f80b1d [] [] }} ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Namespace="calico-system" Pod="whisker-665b8996c-9lnvz" WorkloadEndpoint="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-" Mar 12 23:47:35.091064 containerd[2010]: 2026-03-12 23:47:34.851 [INFO][4873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Namespace="calico-system" Pod="whisker-665b8996c-9lnvz" WorkloadEndpoint="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" Mar 12 23:47:35.091064 containerd[2010]: 2026-03-12 23:47:34.955 [INFO][4890] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" HandleID="k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Workload="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:34.973 [INFO][4890] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" HandleID="k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Workload="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f64a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-143", "pod":"whisker-665b8996c-9lnvz", "timestamp":"2026-03-12 23:47:34.955318501 +0000 UTC"}, Hostname:"ip-172-31-24-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000b8c60)} Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:34.973 [INFO][4890] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:34.974 [INFO][4890] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:34.974 [INFO][4890] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-143' Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:34.979 [INFO][4890] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" host="ip-172-31-24-143" Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:34.989 [INFO][4890] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-143" Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:34.998 [INFO][4890] ipam/ipam.go 526: Trying affinity for 192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:35.002 [INFO][4890] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:35.093214 containerd[2010]: 2026-03-12 23:47:35.008 [INFO][4890] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:35.093695 containerd[2010]: 2026-03-12 23:47:35.008 [INFO][4890] ipam/ipam.go 1245: Attempting to assign 1 
addresses from block block=192.168.37.64/26 handle="k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" host="ip-172-31-24-143" Mar 12 23:47:35.093695 containerd[2010]: 2026-03-12 23:47:35.010 [INFO][4890] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1 Mar 12 23:47:35.093695 containerd[2010]: 2026-03-12 23:47:35.021 [INFO][4890] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" host="ip-172-31-24-143" Mar 12 23:47:35.093695 containerd[2010]: 2026-03-12 23:47:35.033 [INFO][4890] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.66/26] block=192.168.37.64/26 handle="k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" host="ip-172-31-24-143" Mar 12 23:47:35.093695 containerd[2010]: 2026-03-12 23:47:35.033 [INFO][4890] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.66/26] handle="k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" host="ip-172-31-24-143" Mar 12 23:47:35.093695 containerd[2010]: 2026-03-12 23:47:35.033 [INFO][4890] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 23:47:35.093695 containerd[2010]: 2026-03-12 23:47:35.033 [INFO][4890] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.66/26] IPv6=[] ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" HandleID="k8s-pod-network.0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Workload="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" Mar 12 23:47:35.097169 containerd[2010]: 2026-03-12 23:47:35.038 [INFO][4873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Namespace="calico-system" Pod="whisker-665b8996c-9lnvz" WorkloadEndpoint="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0", GenerateName:"whisker-665b8996c-", Namespace:"calico-system", SelfLink:"", UID:"da2822f6-b520-4fa6-b494-5158b4a0788c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"665b8996c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"", Pod:"whisker-665b8996c-9lnvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali075d1f80b1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:35.097169 containerd[2010]: 2026-03-12 23:47:35.039 [INFO][4873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.66/32] ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Namespace="calico-system" Pod="whisker-665b8996c-9lnvz" WorkloadEndpoint="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" Mar 12 23:47:35.098673 containerd[2010]: 2026-03-12 23:47:35.039 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali075d1f80b1d ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Namespace="calico-system" Pod="whisker-665b8996c-9lnvz" WorkloadEndpoint="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" Mar 12 23:47:35.098673 containerd[2010]: 2026-03-12 23:47:35.050 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Namespace="calico-system" Pod="whisker-665b8996c-9lnvz" WorkloadEndpoint="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" Mar 12 23:47:35.098799 containerd[2010]: 2026-03-12 23:47:35.050 [INFO][4873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Namespace="calico-system" Pod="whisker-665b8996c-9lnvz" WorkloadEndpoint="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0", GenerateName:"whisker-665b8996c-", Namespace:"calico-system", SelfLink:"", UID:"da2822f6-b520-4fa6-b494-5158b4a0788c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, 
time.March, 12, 23, 47, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"665b8996c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1", Pod:"whisker-665b8996c-9lnvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali075d1f80b1d", MAC:"c2:b2:fd:72:f4:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:35.098944 containerd[2010]: 2026-03-12 23:47:35.079 [INFO][4873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" Namespace="calico-system" Pod="whisker-665b8996c-9lnvz" WorkloadEndpoint="ip--172--31--24--143-k8s-whisker--665b8996c--9lnvz-eth0" Mar 12 23:47:35.205126 containerd[2010]: time="2026-03-12T23:47:35.205036486Z" level=info msg="connecting to shim 0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1" address="unix:///run/containerd/s/1c7d51865f268f3b6ef866c38b062a7a7ae60c6b95ffbcea184f73feae4db2c0" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:35.292428 systemd[1]: Started cri-containerd-0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1.scope - libcontainer container 0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1. 
Mar 12 23:47:35.391021 containerd[2010]: time="2026-03-12T23:47:35.390943355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-665b8996c-9lnvz,Uid:da2822f6-b520-4fa6-b494-5158b4a0788c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1\"" Mar 12 23:47:35.830505 kubelet[3349]: I0312 23:47:35.830438 3349 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906c1834-12c3-4b7e-ad65-3928196b79d0" path="/var/lib/kubelet/pods/906c1834-12c3-4b7e-ad65-3928196b79d0/volumes" Mar 12 23:47:35.936232 systemd-networkd[1821]: vxlan.calico: Link UP Mar 12 23:47:35.936252 systemd-networkd[1821]: vxlan.calico: Gained carrier Mar 12 23:47:36.180159 systemd-networkd[1821]: cali075d1f80b1d: Gained IPv6LL Mar 12 23:47:36.320422 containerd[2010]: time="2026-03-12T23:47:36.320355683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:36.324137 containerd[2010]: time="2026-03-12T23:47:36.324071939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 12 23:47:36.325048 containerd[2010]: time="2026-03-12T23:47:36.324801275Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:36.329945 containerd[2010]: time="2026-03-12T23:47:36.329857223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:36.331654 containerd[2010]: time="2026-03-12T23:47:36.331413191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.868879386s" Mar 12 23:47:36.331654 containerd[2010]: time="2026-03-12T23:47:36.331476503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 12 23:47:36.336467 containerd[2010]: time="2026-03-12T23:47:36.336344459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 23:47:36.346154 containerd[2010]: time="2026-03-12T23:47:36.345790284Z" level=info msg="CreateContainer within sandbox \"b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 23:47:36.376902 containerd[2010]: time="2026-03-12T23:47:36.376786116Z" level=info msg="Container c7d36ea1dab61b75df568e4ff4c60f722ab8e1504967cbc6ebf6a8296898c744: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:36.391985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485469030.mount: Deactivated successfully. 
Mar 12 23:47:36.411037 containerd[2010]: time="2026-03-12T23:47:36.410445120Z" level=info msg="CreateContainer within sandbox \"b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c7d36ea1dab61b75df568e4ff4c60f722ab8e1504967cbc6ebf6a8296898c744\"" Mar 12 23:47:36.412063 containerd[2010]: time="2026-03-12T23:47:36.412019760Z" level=info msg="StartContainer for \"c7d36ea1dab61b75df568e4ff4c60f722ab8e1504967cbc6ebf6a8296898c744\"" Mar 12 23:47:36.418053 containerd[2010]: time="2026-03-12T23:47:36.417545292Z" level=info msg="connecting to shim c7d36ea1dab61b75df568e4ff4c60f722ab8e1504967cbc6ebf6a8296898c744" address="unix:///run/containerd/s/4d214a1ac80d7883309853ce57fdfadebb6948cc7fce86f88bb5dd08580a303b" protocol=ttrpc version=3 Mar 12 23:47:36.476568 systemd[1]: Started cri-containerd-c7d36ea1dab61b75df568e4ff4c60f722ab8e1504967cbc6ebf6a8296898c744.scope - libcontainer container c7d36ea1dab61b75df568e4ff4c60f722ab8e1504967cbc6ebf6a8296898c744. 
Mar 12 23:47:36.663217 containerd[2010]: time="2026-03-12T23:47:36.663163357Z" level=info msg="StartContainer for \"c7d36ea1dab61b75df568e4ff4c60f722ab8e1504967cbc6ebf6a8296898c744\" returns successfully" Mar 12 23:47:37.652197 systemd-networkd[1821]: vxlan.calico: Gained IPv6LL Mar 12 23:47:37.808068 containerd[2010]: time="2026-03-12T23:47:37.807539583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:37.810099 containerd[2010]: time="2026-03-12T23:47:37.809799147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 12 23:47:37.812245 containerd[2010]: time="2026-03-12T23:47:37.812176767Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:37.818743 containerd[2010]: time="2026-03-12T23:47:37.818692875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:37.820236 containerd[2010]: time="2026-03-12T23:47:37.819939183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.483529132s" Mar 12 23:47:37.820236 containerd[2010]: time="2026-03-12T23:47:37.820007847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 12 23:47:37.823122 containerd[2010]: 
time="2026-03-12T23:47:37.823056111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 23:47:37.835354 containerd[2010]: time="2026-03-12T23:47:37.835236591Z" level=info msg="CreateContainer within sandbox \"0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 23:47:37.855316 containerd[2010]: time="2026-03-12T23:47:37.854804943Z" level=info msg="Container e5cbc260f61858213548a8a693547f46d0bb824b85c2532f38b321d048f6d0f9: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:37.867756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652753793.mount: Deactivated successfully. Mar 12 23:47:37.877749 containerd[2010]: time="2026-03-12T23:47:37.877593903Z" level=info msg="CreateContainer within sandbox \"0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e5cbc260f61858213548a8a693547f46d0bb824b85c2532f38b321d048f6d0f9\"" Mar 12 23:47:37.878632 containerd[2010]: time="2026-03-12T23:47:37.878508591Z" level=info msg="StartContainer for \"e5cbc260f61858213548a8a693547f46d0bb824b85c2532f38b321d048f6d0f9\"" Mar 12 23:47:37.882447 containerd[2010]: time="2026-03-12T23:47:37.882084855Z" level=info msg="connecting to shim e5cbc260f61858213548a8a693547f46d0bb824b85c2532f38b321d048f6d0f9" address="unix:///run/containerd/s/1c7d51865f268f3b6ef866c38b062a7a7ae60c6b95ffbcea184f73feae4db2c0" protocol=ttrpc version=3 Mar 12 23:47:37.925308 systemd[1]: Started cri-containerd-e5cbc260f61858213548a8a693547f46d0bb824b85c2532f38b321d048f6d0f9.scope - libcontainer container e5cbc260f61858213548a8a693547f46d0bb824b85c2532f38b321d048f6d0f9. 
Mar 12 23:47:38.045296 containerd[2010]: time="2026-03-12T23:47:38.045225960Z" level=info msg="StartContainer for \"e5cbc260f61858213548a8a693547f46d0bb824b85c2532f38b321d048f6d0f9\" returns successfully" Mar 12 23:47:39.252059 containerd[2010]: time="2026-03-12T23:47:39.251952122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:39.255047 containerd[2010]: time="2026-03-12T23:47:39.254931902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 12 23:47:39.257539 containerd[2010]: time="2026-03-12T23:47:39.257458958Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:39.262732 containerd[2010]: time="2026-03-12T23:47:39.262079522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:39.263572 containerd[2010]: time="2026-03-12T23:47:39.263475878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.440361135s" Mar 12 23:47:39.263686 containerd[2010]: time="2026-03-12T23:47:39.263574218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 12 23:47:39.266328 containerd[2010]: 
time="2026-03-12T23:47:39.266273678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 23:47:39.277530 containerd[2010]: time="2026-03-12T23:47:39.277470194Z" level=info msg="CreateContainer within sandbox \"b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 23:47:39.297024 containerd[2010]: time="2026-03-12T23:47:39.296273750Z" level=info msg="Container 03eaade83e8450cf245bbf6a244c85c3e6754be76c2bbb812c21a9659f6045ea: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:39.322496 containerd[2010]: time="2026-03-12T23:47:39.322420514Z" level=info msg="CreateContainer within sandbox \"b57439701d216823fccc38f4c3e25b81d86bf9467adfb56465c3531c6d71fe47\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03eaade83e8450cf245bbf6a244c85c3e6754be76c2bbb812c21a9659f6045ea\"" Mar 12 23:47:39.323820 containerd[2010]: time="2026-03-12T23:47:39.323678978Z" level=info msg="StartContainer for \"03eaade83e8450cf245bbf6a244c85c3e6754be76c2bbb812c21a9659f6045ea\"" Mar 12 23:47:39.334015 containerd[2010]: time="2026-03-12T23:47:39.333840878Z" level=info msg="connecting to shim 03eaade83e8450cf245bbf6a244c85c3e6754be76c2bbb812c21a9659f6045ea" address="unix:///run/containerd/s/4d214a1ac80d7883309853ce57fdfadebb6948cc7fce86f88bb5dd08580a303b" protocol=ttrpc version=3 Mar 12 23:47:39.388290 systemd[1]: Started cri-containerd-03eaade83e8450cf245bbf6a244c85c3e6754be76c2bbb812c21a9659f6045ea.scope - libcontainer container 03eaade83e8450cf245bbf6a244c85c3e6754be76c2bbb812c21a9659f6045ea. 
Mar 12 23:47:39.512745 containerd[2010]: time="2026-03-12T23:47:39.512483343Z" level=info msg="StartContainer for \"03eaade83e8450cf245bbf6a244c85c3e6754be76c2bbb812c21a9659f6045ea\" returns successfully" Mar 12 23:47:39.787753 ntpd[2185]: Listen normally on 6 vxlan.calico 192.168.37.64:123 Mar 12 23:47:39.788524 ntpd[2185]: 12 Mar 23:47:39 ntpd[2185]: Listen normally on 6 vxlan.calico 192.168.37.64:123 Mar 12 23:47:39.788524 ntpd[2185]: 12 Mar 23:47:39 ntpd[2185]: Listen normally on 7 cali3b4486d0807 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 12 23:47:39.788524 ntpd[2185]: 12 Mar 23:47:39 ntpd[2185]: Listen normally on 8 cali075d1f80b1d [fe80::ecee:eeff:feee:eeee%5]:123 Mar 12 23:47:39.788524 ntpd[2185]: 12 Mar 23:47:39 ntpd[2185]: Listen normally on 9 vxlan.calico [fe80::6476:4aff:feea:7bc2%6]:123 Mar 12 23:47:39.787830 ntpd[2185]: Listen normally on 7 cali3b4486d0807 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 12 23:47:39.787892 ntpd[2185]: Listen normally on 8 cali075d1f80b1d [fe80::ecee:eeff:feee:eeee%5]:123 Mar 12 23:47:39.787941 ntpd[2185]: Listen normally on 9 vxlan.calico [fe80::6476:4aff:feea:7bc2%6]:123 Mar 12 23:47:39.956801 kubelet[3349]: I0312 23:47:39.956739 3349 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 23:47:39.956801 kubelet[3349]: I0312 23:47:39.956805 3349 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 23:47:40.337947 kubelet[3349]: I0312 23:47:40.337653 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6s59z" podStartSLOduration=21.533306242 podStartE2EDuration="27.337478595s" podCreationTimestamp="2026-03-12 23:47:13 +0000 UTC" firstStartedPulling="2026-03-12 23:47:33.461695941 +0000 UTC m=+45.960449641" lastFinishedPulling="2026-03-12 23:47:39.265868282 +0000 
UTC m=+51.764621994" observedRunningTime="2026-03-12 23:47:40.335865459 +0000 UTC m=+52.834619195" watchObservedRunningTime="2026-03-12 23:47:40.337478595 +0000 UTC m=+52.836232307" Mar 12 23:47:40.871342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4127606595.mount: Deactivated successfully. Mar 12 23:47:40.927298 containerd[2010]: time="2026-03-12T23:47:40.927235962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:40.929698 containerd[2010]: time="2026-03-12T23:47:40.929331330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 12 23:47:40.932023 containerd[2010]: time="2026-03-12T23:47:40.931958538Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:40.938867 containerd[2010]: time="2026-03-12T23:47:40.938767734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:40.940367 containerd[2010]: time="2026-03-12T23:47:40.940309146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.672901624s" Mar 12 23:47:40.940515 containerd[2010]: time="2026-03-12T23:47:40.940393530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference 
\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 12 23:47:40.948695 containerd[2010]: time="2026-03-12T23:47:40.948628122Z" level=info msg="CreateContainer within sandbox \"0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 23:47:40.968931 containerd[2010]: time="2026-03-12T23:47:40.967461294Z" level=info msg="Container e91ec50fe8a09f75bacdd5a8b526cabc5cddf489d7754339cb78c7581ad70a52: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:40.981737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307760865.mount: Deactivated successfully. Mar 12 23:47:40.989120 containerd[2010]: time="2026-03-12T23:47:40.988969027Z" level=info msg="CreateContainer within sandbox \"0fa705f79785e4650c68b873603fb1a9718b84e1a6f0a018facb3b94e12e67d1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e91ec50fe8a09f75bacdd5a8b526cabc5cddf489d7754339cb78c7581ad70a52\"" Mar 12 23:47:40.990079 containerd[2010]: time="2026-03-12T23:47:40.990029863Z" level=info msg="StartContainer for \"e91ec50fe8a09f75bacdd5a8b526cabc5cddf489d7754339cb78c7581ad70a52\"" Mar 12 23:47:40.995288 containerd[2010]: time="2026-03-12T23:47:40.994958899Z" level=info msg="connecting to shim e91ec50fe8a09f75bacdd5a8b526cabc5cddf489d7754339cb78c7581ad70a52" address="unix:///run/containerd/s/1c7d51865f268f3b6ef866c38b062a7a7ae60c6b95ffbcea184f73feae4db2c0" protocol=ttrpc version=3 Mar 12 23:47:41.041319 systemd[1]: Started cri-containerd-e91ec50fe8a09f75bacdd5a8b526cabc5cddf489d7754339cb78c7581ad70a52.scope - libcontainer container e91ec50fe8a09f75bacdd5a8b526cabc5cddf489d7754339cb78c7581ad70a52. 
Mar 12 23:47:41.136712 containerd[2010]: time="2026-03-12T23:47:41.135588615Z" level=info msg="StartContainer for \"e91ec50fe8a09f75bacdd5a8b526cabc5cddf489d7754339cb78c7581ad70a52\" returns successfully" Mar 12 23:47:42.829503 containerd[2010]: time="2026-03-12T23:47:42.829061948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gjz4s,Uid:6a3f4877-4e7d-4949-9822-2d134b1ffd89,Namespace:kube-system,Attempt:0,}" Mar 12 23:47:43.036797 systemd-networkd[1821]: cali98ce76aad37: Link UP Mar 12 23:47:43.039709 systemd-networkd[1821]: cali98ce76aad37: Gained carrier Mar 12 23:47:43.048219 (udev-worker)[5241]: Network interface NamePolicy= disabled on kernel command line. Mar 12 23:47:43.056948 kubelet[3349]: I0312 23:47:43.056850 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-665b8996c-9lnvz" podStartSLOduration=3.508703066 podStartE2EDuration="9.056826629s" podCreationTimestamp="2026-03-12 23:47:34 +0000 UTC" firstStartedPulling="2026-03-12 23:47:35.394046831 +0000 UTC m=+47.892800543" lastFinishedPulling="2026-03-12 23:47:40.942170406 +0000 UTC m=+53.440924106" observedRunningTime="2026-03-12 23:47:41.35490208 +0000 UTC m=+53.853655828" watchObservedRunningTime="2026-03-12 23:47:43.056826629 +0000 UTC m=+55.555580341" Mar 12 23:47:43.068529 containerd[2010]: 2026-03-12 23:47:42.911 [INFO][5223] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0 coredns-66bc5c9577- kube-system 6a3f4877-4e7d-4949-9822-2d134b1ffd89 869 0 2026-03-12 23:46:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-143 coredns-66bc5c9577-gjz4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali98ce76aad37 [{dns UDP 53 0 } {dns-tcp TCP 53 
0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Namespace="kube-system" Pod="coredns-66bc5c9577-gjz4s" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-" Mar 12 23:47:43.068529 containerd[2010]: 2026-03-12 23:47:42.911 [INFO][5223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Namespace="kube-system" Pod="coredns-66bc5c9577-gjz4s" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" Mar 12 23:47:43.068529 containerd[2010]: 2026-03-12 23:47:42.960 [INFO][5234] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" HandleID="k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Workload="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:42.976 [INFO][5234] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" HandleID="k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Workload="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-143", "pod":"coredns-66bc5c9577-gjz4s", "timestamp":"2026-03-12 23:47:42.960588416 +0000 UTC"}, Hostname:"ip-172-31-24-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001878c0)} Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:42.976 [INFO][5234] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:42.976 [INFO][5234] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:42.976 [INFO][5234] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-143' Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:42.980 [INFO][5234] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" host="ip-172-31-24-143" Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:42.988 [INFO][5234] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-143" Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:42.996 [INFO][5234] ipam/ipam.go 526: Trying affinity for 192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:42.999 [INFO][5234] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:43.068856 containerd[2010]: 2026-03-12 23:47:43.003 [INFO][5234] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:43.069440 containerd[2010]: 2026-03-12 23:47:43.003 [INFO][5234] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" host="ip-172-31-24-143" Mar 12 23:47:43.069440 containerd[2010]: 2026-03-12 23:47:43.007 [INFO][5234] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d Mar 12 23:47:43.069440 containerd[2010]: 2026-03-12 23:47:43.014 [INFO][5234] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" 
host="ip-172-31-24-143" Mar 12 23:47:43.069440 containerd[2010]: 2026-03-12 23:47:43.025 [INFO][5234] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.67/26] block=192.168.37.64/26 handle="k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" host="ip-172-31-24-143" Mar 12 23:47:43.069440 containerd[2010]: 2026-03-12 23:47:43.025 [INFO][5234] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.67/26] handle="k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" host="ip-172-31-24-143" Mar 12 23:47:43.069440 containerd[2010]: 2026-03-12 23:47:43.025 [INFO][5234] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:43.069440 containerd[2010]: 2026-03-12 23:47:43.025 [INFO][5234] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.67/26] IPv6=[] ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" HandleID="k8s-pod-network.7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Workload="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" Mar 12 23:47:43.074227 containerd[2010]: 2026-03-12 23:47:43.029 [INFO][5223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Namespace="kube-system" Pod="coredns-66bc5c9577-gjz4s" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6a3f4877-4e7d-4949-9822-2d134b1ffd89", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"", Pod:"coredns-66bc5c9577-gjz4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98ce76aad37", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:43.074227 containerd[2010]: 2026-03-12 23:47:43.030 [INFO][5223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.67/32] ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Namespace="kube-system" Pod="coredns-66bc5c9577-gjz4s" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" Mar 12 23:47:43.074227 containerd[2010]: 2026-03-12 23:47:43.030 [INFO][5223] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali98ce76aad37 ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Namespace="kube-system" Pod="coredns-66bc5c9577-gjz4s" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" Mar 12 23:47:43.074227 containerd[2010]: 2026-03-12 23:47:43.035 [INFO][5223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Namespace="kube-system" Pod="coredns-66bc5c9577-gjz4s" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" Mar 12 23:47:43.074227 containerd[2010]: 2026-03-12 23:47:43.036 [INFO][5223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Namespace="kube-system" Pod="coredns-66bc5c9577-gjz4s" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6a3f4877-4e7d-4949-9822-2d134b1ffd89", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", 
ContainerID:"7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d", Pod:"coredns-66bc5c9577-gjz4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98ce76aad37", MAC:"62:5a:e7:c0:bf:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:43.074227 containerd[2010]: 2026-03-12 23:47:43.058 [INFO][5223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" Namespace="kube-system" Pod="coredns-66bc5c9577-gjz4s" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--gjz4s-eth0" Mar 12 23:47:43.142636 containerd[2010]: time="2026-03-12T23:47:43.141308021Z" level=info msg="connecting to shim 7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d" address="unix:///run/containerd/s/3d747770735c50f657334de0bd2b086a38471e3ec34ced44b6d0d5c914f5b1be" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:43.201436 systemd[1]: Started 
cri-containerd-7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d.scope - libcontainer container 7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d. Mar 12 23:47:43.295666 containerd[2010]: time="2026-03-12T23:47:43.295497198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gjz4s,Uid:6a3f4877-4e7d-4949-9822-2d134b1ffd89,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d\"" Mar 12 23:47:43.325715 containerd[2010]: time="2026-03-12T23:47:43.325409970Z" level=info msg="CreateContainer within sandbox \"7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 23:47:43.358175 containerd[2010]: time="2026-03-12T23:47:43.358067046Z" level=info msg="Container 918044979c1c46c41ad21b9279f2ffba4a7db1c12b13bb4b06dd0ce95a8b376f: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:43.373307 containerd[2010]: time="2026-03-12T23:47:43.373232502Z" level=info msg="CreateContainer within sandbox \"7a317c40c0fe66f5e9b8fb0b39aa6b6676d77c457063ddbb0d511420857eb40d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"918044979c1c46c41ad21b9279f2ffba4a7db1c12b13bb4b06dd0ce95a8b376f\"" Mar 12 23:47:43.375034 containerd[2010]: time="2026-03-12T23:47:43.374333094Z" level=info msg="StartContainer for \"918044979c1c46c41ad21b9279f2ffba4a7db1c12b13bb4b06dd0ce95a8b376f\"" Mar 12 23:47:43.379583 containerd[2010]: time="2026-03-12T23:47:43.379347474Z" level=info msg="connecting to shim 918044979c1c46c41ad21b9279f2ffba4a7db1c12b13bb4b06dd0ce95a8b376f" address="unix:///run/containerd/s/3d747770735c50f657334de0bd2b086a38471e3ec34ced44b6d0d5c914f5b1be" protocol=ttrpc version=3 Mar 12 23:47:43.414298 systemd[1]: Started cri-containerd-918044979c1c46c41ad21b9279f2ffba4a7db1c12b13bb4b06dd0ce95a8b376f.scope - libcontainer container 
918044979c1c46c41ad21b9279f2ffba4a7db1c12b13bb4b06dd0ce95a8b376f. Mar 12 23:47:43.494754 containerd[2010]: time="2026-03-12T23:47:43.494593243Z" level=info msg="StartContainer for \"918044979c1c46c41ad21b9279f2ffba4a7db1c12b13bb4b06dd0ce95a8b376f\" returns successfully" Mar 12 23:47:43.833416 containerd[2010]: time="2026-03-12T23:47:43.833354037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c756fffd4-8qfx6,Uid:2d4c7489-5970-4abd-8680-102ae8ce66cd,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:43.840660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225324326.mount: Deactivated successfully. Mar 12 23:47:44.235909 systemd-networkd[1821]: calida923888b60: Link UP Mar 12 23:47:44.241081 systemd-networkd[1821]: calida923888b60: Gained carrier Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:43.959 [INFO][5335] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0 calico-apiserver-7c756fffd4- calico-system 2d4c7489-5970-4abd-8680-102ae8ce66cd 876 0 2026-03-12 23:47:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c756fffd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-143 calico-apiserver-7c756fffd4-8qfx6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calida923888b60 [] [] }} ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-8qfx6" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:43.960 [INFO][5335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-8qfx6" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.027 [INFO][5348] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" HandleID="k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Workload="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.057 [INFO][5348] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" HandleID="k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Workload="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-143", "pod":"calico-apiserver-7c756fffd4-8qfx6", "timestamp":"2026-03-12 23:47:44.027826722 +0000 UTC"}, Hostname:"ip-172-31-24-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b4580)} Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.057 [INFO][5348] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.057 [INFO][5348] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.057 [INFO][5348] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-143' Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.081 [INFO][5348] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.118 [INFO][5348] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.134 [INFO][5348] ipam/ipam.go 526: Trying affinity for 192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.141 [INFO][5348] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.156 [INFO][5348] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.156 [INFO][5348] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.163 [INFO][5348] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38 Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.175 [INFO][5348] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.208 [INFO][5348] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.68/26] block=192.168.37.64/26 
handle="k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.208 [INFO][5348] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.68/26] handle="k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" host="ip-172-31-24-143" Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.208 [INFO][5348] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:44.308019 containerd[2010]: 2026-03-12 23:47:44.208 [INFO][5348] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.68/26] IPv6=[] ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" HandleID="k8s-pod-network.c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Workload="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" Mar 12 23:47:44.309199 containerd[2010]: 2026-03-12 23:47:44.218 [INFO][5335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-8qfx6" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0", GenerateName:"calico-apiserver-7c756fffd4-", Namespace:"calico-system", SelfLink:"", UID:"2d4c7489-5970-4abd-8680-102ae8ce66cd", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c756fffd4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"", Pod:"calico-apiserver-7c756fffd4-8qfx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calida923888b60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:44.309199 containerd[2010]: 2026-03-12 23:47:44.220 [INFO][5335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.68/32] ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-8qfx6" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" Mar 12 23:47:44.309199 containerd[2010]: 2026-03-12 23:47:44.220 [INFO][5335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida923888b60 ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-8qfx6" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" Mar 12 23:47:44.309199 containerd[2010]: 2026-03-12 23:47:44.240 [INFO][5335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-8qfx6" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" Mar 12 23:47:44.309199 containerd[2010]: 2026-03-12 23:47:44.244 [INFO][5335] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-8qfx6" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0", GenerateName:"calico-apiserver-7c756fffd4-", Namespace:"calico-system", SelfLink:"", UID:"2d4c7489-5970-4abd-8680-102ae8ce66cd", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c756fffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38", Pod:"calico-apiserver-7c756fffd4-8qfx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calida923888b60", MAC:"f6:79:10:41:8f:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:44.309199 containerd[2010]: 2026-03-12 23:47:44.301 [INFO][5335] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-8qfx6" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--8qfx6-eth0" Mar 12 23:47:44.400027 containerd[2010]: time="2026-03-12T23:47:44.398174312Z" level=info msg="connecting to shim c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38" address="unix:///run/containerd/s/0b327fa074cfc11ba9265453bd60b14bc0a6782f81663be35dcb9f48771d5a9b" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:44.516345 systemd[1]: Started cri-containerd-c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38.scope - libcontainer container c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38. Mar 12 23:47:44.570711 kubelet[3349]: I0312 23:47:44.570599 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gjz4s" podStartSLOduration=50.570553964 podStartE2EDuration="50.570553964s" podCreationTimestamp="2026-03-12 23:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:47:44.529914596 +0000 UTC m=+57.028668356" watchObservedRunningTime="2026-03-12 23:47:44.570553964 +0000 UTC m=+57.069307664" Mar 12 23:47:44.725422 containerd[2010]: time="2026-03-12T23:47:44.725348349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c756fffd4-8qfx6,Uid:2d4c7489-5970-4abd-8680-102ae8ce66cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38\"" Mar 12 23:47:44.730857 containerd[2010]: time="2026-03-12T23:47:44.730768905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 23:47:44.756285 systemd-networkd[1821]: cali98ce76aad37: Gained IPv6LL Mar 12 23:47:44.829872 containerd[2010]: 
time="2026-03-12T23:47:44.829777966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvrxm,Uid:d46ee16e-4443-484c-b953-35f51dbb7a84,Namespace:kube-system,Attempt:0,}" Mar 12 23:47:45.012581 systemd[1]: Started sshd@7-172.31.24.143:22-4.153.228.146:44598.service - OpenSSH per-connection server daemon (4.153.228.146:44598). Mar 12 23:47:45.090024 systemd-networkd[1821]: calia75c1da2da7: Link UP Mar 12 23:47:45.093482 systemd-networkd[1821]: calia75c1da2da7: Gained carrier Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:44.925 [INFO][5432] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0 coredns-66bc5c9577- kube-system d46ee16e-4443-484c-b953-35f51dbb7a84 875 0 2026-03-12 23:46:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-143 coredns-66bc5c9577-kvrxm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia75c1da2da7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Namespace="kube-system" Pod="coredns-66bc5c9577-kvrxm" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:44.926 [INFO][5432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Namespace="kube-system" Pod="coredns-66bc5c9577-kvrxm" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:44.980 [INFO][5447] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" HandleID="k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Workload="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:44.998 [INFO][5447] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" HandleID="k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Workload="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-143", "pod":"coredns-66bc5c9577-kvrxm", "timestamp":"2026-03-12 23:47:44.980891374 +0000 UTC"}, Hostname:"ip-172-31-24-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004f6160)} Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:44.998 [INFO][5447] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:44.998 [INFO][5447] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:44.998 [INFO][5447] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-143' Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.003 [INFO][5447] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.016 [INFO][5447] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.029 [INFO][5447] ipam/ipam.go 526: Trying affinity for 192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.034 [INFO][5447] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.040 [INFO][5447] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.041 [INFO][5447] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.045 [INFO][5447] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0 Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.054 [INFO][5447] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.072 [INFO][5447] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.69/26] block=192.168.37.64/26 
handle="k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.072 [INFO][5447] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.69/26] handle="k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" host="ip-172-31-24-143" Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.072 [INFO][5447] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:45.132103 containerd[2010]: 2026-03-12 23:47:45.074 [INFO][5447] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.69/26] IPv6=[] ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" HandleID="k8s-pod-network.4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Workload="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" Mar 12 23:47:45.135949 containerd[2010]: 2026-03-12 23:47:45.083 [INFO][5432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Namespace="kube-system" Pod="coredns-66bc5c9577-kvrxm" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d46ee16e-4443-484c-b953-35f51dbb7a84", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"", Pod:"coredns-66bc5c9577-kvrxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia75c1da2da7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:45.135949 containerd[2010]: 2026-03-12 23:47:45.083 [INFO][5432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.69/32] ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Namespace="kube-system" Pod="coredns-66bc5c9577-kvrxm" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" Mar 12 23:47:45.135949 containerd[2010]: 2026-03-12 23:47:45.083 [INFO][5432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia75c1da2da7 ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Namespace="kube-system" Pod="coredns-66bc5c9577-kvrxm" 
WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" Mar 12 23:47:45.135949 containerd[2010]: 2026-03-12 23:47:45.095 [INFO][5432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Namespace="kube-system" Pod="coredns-66bc5c9577-kvrxm" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" Mar 12 23:47:45.135949 containerd[2010]: 2026-03-12 23:47:45.097 [INFO][5432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Namespace="kube-system" Pod="coredns-66bc5c9577-kvrxm" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d46ee16e-4443-484c-b953-35f51dbb7a84", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0", Pod:"coredns-66bc5c9577-kvrxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia75c1da2da7", MAC:"36:28:8c:84:92:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:45.135949 containerd[2010]: 2026-03-12 23:47:45.121 [INFO][5432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" Namespace="kube-system" Pod="coredns-66bc5c9577-kvrxm" WorkloadEndpoint="ip--172--31--24--143-k8s-coredns--66bc5c9577--kvrxm-eth0" Mar 12 23:47:45.203222 containerd[2010]: time="2026-03-12T23:47:45.202907299Z" level=info msg="connecting to shim 4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0" address="unix:///run/containerd/s/a9e963ba21d8c456cc95c6d5a00c33b9aebd064e343b0be1b6b2bec6a74a8094" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:45.270680 systemd[1]: Started cri-containerd-4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0.scope - libcontainer container 4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0. 
Mar 12 23:47:45.365891 containerd[2010]: time="2026-03-12T23:47:45.364468196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvrxm,Uid:d46ee16e-4443-484c-b953-35f51dbb7a84,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0\"" Mar 12 23:47:45.376799 containerd[2010]: time="2026-03-12T23:47:45.376328816Z" level=info msg="CreateContainer within sandbox \"4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 23:47:45.409020 containerd[2010]: time="2026-03-12T23:47:45.408754917Z" level=info msg="Container 79e651f02ac88e651dcbd33d4cae69e292edd3d1846b6f3f39bb21d07959ad7f: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:45.425879 containerd[2010]: time="2026-03-12T23:47:45.425798241Z" level=info msg="CreateContainer within sandbox \"4f943c7e183fa051d9b1acf05d550de2b0a2c2b7e0ed2485a6d76438da2732b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79e651f02ac88e651dcbd33d4cae69e292edd3d1846b6f3f39bb21d07959ad7f\"" Mar 12 23:47:45.427200 containerd[2010]: time="2026-03-12T23:47:45.426962589Z" level=info msg="StartContainer for \"79e651f02ac88e651dcbd33d4cae69e292edd3d1846b6f3f39bb21d07959ad7f\"" Mar 12 23:47:45.429464 containerd[2010]: time="2026-03-12T23:47:45.429312813Z" level=info msg="connecting to shim 79e651f02ac88e651dcbd33d4cae69e292edd3d1846b6f3f39bb21d07959ad7f" address="unix:///run/containerd/s/a9e963ba21d8c456cc95c6d5a00c33b9aebd064e343b0be1b6b2bec6a74a8094" protocol=ttrpc version=3 Mar 12 23:47:45.480519 systemd[1]: Started cri-containerd-79e651f02ac88e651dcbd33d4cae69e292edd3d1846b6f3f39bb21d07959ad7f.scope - libcontainer container 79e651f02ac88e651dcbd33d4cae69e292edd3d1846b6f3f39bb21d07959ad7f. 
Mar 12 23:47:45.557620 containerd[2010]: time="2026-03-12T23:47:45.557563941Z" level=info msg="StartContainer for \"79e651f02ac88e651dcbd33d4cae69e292edd3d1846b6f3f39bb21d07959ad7f\" returns successfully" Mar 12 23:47:45.562426 sshd[5455]: Accepted publickey for core from 4.153.228.146 port 44598 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:47:45.565957 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:47:45.582662 systemd-logind[1980]: New session 8 of user core. Mar 12 23:47:45.591978 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 12 23:47:45.780489 systemd-networkd[1821]: calida923888b60: Gained IPv6LL Mar 12 23:47:45.831848 containerd[2010]: time="2026-03-12T23:47:45.831200579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfb7fbb5f-qtttw,Uid:64a54bab-a76e-4249-995e-1d55d1566fc4,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:45.837202 containerd[2010]: time="2026-03-12T23:47:45.836399891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-xmwzp,Uid:d3835311-ddc2-442e-a8c0-72f59b9ff3ae,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:45.846314 containerd[2010]: time="2026-03-12T23:47:45.846248051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c756fffd4-j2tp7,Uid:abc90d13-06a6-4b38-88bd-93ea7ceb4e66,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:46.183569 sshd[5554]: Connection closed by 4.153.228.146 port 44598 Mar 12 23:47:46.185969 sshd-session[5455]: pam_unix(sshd:session): session closed for user core Mar 12 23:47:46.205472 systemd-logind[1980]: Session 8 logged out. Waiting for processes to exit. Mar 12 23:47:46.206363 systemd[1]: sshd@7-172.31.24.143:22-4.153.228.146:44598.service: Deactivated successfully. Mar 12 23:47:46.218407 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 23:47:46.227603 systemd-logind[1980]: Removed session 8. 
Mar 12 23:47:46.496548 systemd-networkd[1821]: calic270d103a47: Link UP Mar 12 23:47:46.501081 systemd-networkd[1821]: calic270d103a47: Gained carrier Mar 12 23:47:46.532794 kubelet[3349]: I0312 23:47:46.529367 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kvrxm" podStartSLOduration=52.529222534 podStartE2EDuration="52.529222534s" podCreationTimestamp="2026-03-12 23:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:47:46.52397335 +0000 UTC m=+59.022727074" watchObservedRunningTime="2026-03-12 23:47:46.529222534 +0000 UTC m=+59.028094326" Mar 12 23:47:46.548236 systemd-networkd[1821]: calia75c1da2da7: Gained IPv6LL Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.125 [INFO][5582] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0 calico-apiserver-7c756fffd4- calico-system abc90d13-06a6-4b38-88bd-93ea7ceb4e66 880 0 2026-03-12 23:47:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c756fffd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-143 calico-apiserver-7c756fffd4-j2tp7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic270d103a47 [] [] }} ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-j2tp7" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.126 [INFO][5582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-j2tp7" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.332 [INFO][5610] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" HandleID="k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Workload="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.365 [INFO][5610] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" HandleID="k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Workload="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e460), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-143", "pod":"calico-apiserver-7c756fffd4-j2tp7", "timestamp":"2026-03-12 23:47:46.332969781 +0000 UTC"}, Hostname:"ip-172-31-24-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000452420)} Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.365 [INFO][5610] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.365 [INFO][5610] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.365 [INFO][5610] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-143' Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.371 [INFO][5610] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.384 [INFO][5610] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.399 [INFO][5610] ipam/ipam.go 526: Trying affinity for 192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.408 [INFO][5610] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.419 [INFO][5610] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.419 [INFO][5610] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.427 [INFO][5610] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3 Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.441 [INFO][5610] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.471 [INFO][5610] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.70/26] block=192.168.37.64/26 
handle="k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.471 [INFO][5610] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.70/26] handle="k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" host="ip-172-31-24-143" Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.471 [INFO][5610] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:46.602397 containerd[2010]: 2026-03-12 23:47:46.472 [INFO][5610] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.70/26] IPv6=[] ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" HandleID="k8s-pod-network.b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Workload="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" Mar 12 23:47:46.607866 containerd[2010]: 2026-03-12 23:47:46.486 [INFO][5582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-j2tp7" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0", GenerateName:"calico-apiserver-7c756fffd4-", Namespace:"calico-system", SelfLink:"", UID:"abc90d13-06a6-4b38-88bd-93ea7ceb4e66", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c756fffd4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"", Pod:"calico-apiserver-7c756fffd4-j2tp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic270d103a47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:46.607866 containerd[2010]: 2026-03-12 23:47:46.488 [INFO][5582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.70/32] ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-j2tp7" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" Mar 12 23:47:46.607866 containerd[2010]: 2026-03-12 23:47:46.488 [INFO][5582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic270d103a47 ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-j2tp7" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" Mar 12 23:47:46.607866 containerd[2010]: 2026-03-12 23:47:46.512 [INFO][5582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-j2tp7" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" Mar 12 23:47:46.607866 containerd[2010]: 2026-03-12 23:47:46.521 [INFO][5582] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-j2tp7" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0", GenerateName:"calico-apiserver-7c756fffd4-", Namespace:"calico-system", SelfLink:"", UID:"abc90d13-06a6-4b38-88bd-93ea7ceb4e66", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c756fffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3", Pod:"calico-apiserver-7c756fffd4-j2tp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic270d103a47", MAC:"2a:71:4d:5a:20:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:46.607866 containerd[2010]: 2026-03-12 23:47:46.594 [INFO][5582] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" Namespace="calico-system" Pod="calico-apiserver-7c756fffd4-j2tp7" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--apiserver--7c756fffd4--j2tp7-eth0" Mar 12 23:47:46.655964 systemd-networkd[1821]: cali4785379484d: Link UP Mar 12 23:47:46.659401 systemd-networkd[1821]: cali4785379484d: Gained carrier Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.172 [INFO][5571] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0 calico-kube-controllers-6cfb7fbb5f- calico-system 64a54bab-a76e-4249-995e-1d55d1566fc4 872 0 2026-03-12 23:47:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cfb7fbb5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-143 calico-kube-controllers-6cfb7fbb5f-qtttw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4785379484d [] [] }} ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Namespace="calico-system" Pod="calico-kube-controllers-6cfb7fbb5f-qtttw" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.172 [INFO][5571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Namespace="calico-system" Pod="calico-kube-controllers-6cfb7fbb5f-qtttw" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.372 [INFO][5617] ipam/ipam_plugin.go 235: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" HandleID="k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Workload="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.408 [INFO][5617] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" HandleID="k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Workload="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cbeb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-143", "pod":"calico-kube-controllers-6cfb7fbb5f-qtttw", "timestamp":"2026-03-12 23:47:46.372948153 +0000 UTC"}, Hostname:"ip-172-31-24-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000330420)} Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.409 [INFO][5617] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.471 [INFO][5617] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.472 [INFO][5617] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-143' Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.485 [INFO][5617] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.507 [INFO][5617] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.553 [INFO][5617] ipam/ipam.go 526: Trying affinity for 192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.559 [INFO][5617] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.566 [INFO][5617] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.566 [INFO][5617] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.569 [INFO][5617] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220 Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.580 [INFO][5617] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.614 [INFO][5617] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.71/26] block=192.168.37.64/26 
handle="k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.614 [INFO][5617] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.71/26] handle="k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" host="ip-172-31-24-143" Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.614 [INFO][5617] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:46.733412 containerd[2010]: 2026-03-12 23:47:46.614 [INFO][5617] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.71/26] IPv6=[] ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" HandleID="k8s-pod-network.5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Workload="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" Mar 12 23:47:46.735589 containerd[2010]: 2026-03-12 23:47:46.635 [INFO][5571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Namespace="calico-system" Pod="calico-kube-controllers-6cfb7fbb5f-qtttw" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0", GenerateName:"calico-kube-controllers-6cfb7fbb5f-", Namespace:"calico-system", SelfLink:"", UID:"64a54bab-a76e-4249-995e-1d55d1566fc4", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cfb7fbb5f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"", Pod:"calico-kube-controllers-6cfb7fbb5f-qtttw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4785379484d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:46.735589 containerd[2010]: 2026-03-12 23:47:46.636 [INFO][5571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.71/32] ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Namespace="calico-system" Pod="calico-kube-controllers-6cfb7fbb5f-qtttw" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" Mar 12 23:47:46.735589 containerd[2010]: 2026-03-12 23:47:46.636 [INFO][5571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4785379484d ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Namespace="calico-system" Pod="calico-kube-controllers-6cfb7fbb5f-qtttw" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" Mar 12 23:47:46.735589 containerd[2010]: 2026-03-12 23:47:46.661 [INFO][5571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Namespace="calico-system" Pod="calico-kube-controllers-6cfb7fbb5f-qtttw" 
WorkloadEndpoint="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" Mar 12 23:47:46.735589 containerd[2010]: 2026-03-12 23:47:46.663 [INFO][5571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Namespace="calico-system" Pod="calico-kube-controllers-6cfb7fbb5f-qtttw" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0", GenerateName:"calico-kube-controllers-6cfb7fbb5f-", Namespace:"calico-system", SelfLink:"", UID:"64a54bab-a76e-4249-995e-1d55d1566fc4", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cfb7fbb5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220", Pod:"calico-kube-controllers-6cfb7fbb5f-qtttw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4785379484d", 
MAC:"d6:f5:57:e4:55:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:46.735589 containerd[2010]: 2026-03-12 23:47:46.717 [INFO][5571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" Namespace="calico-system" Pod="calico-kube-controllers-6cfb7fbb5f-qtttw" WorkloadEndpoint="ip--172--31--24--143-k8s-calico--kube--controllers--6cfb7fbb5f--qtttw-eth0" Mar 12 23:47:46.762658 containerd[2010]: time="2026-03-12T23:47:46.762372479Z" level=info msg="connecting to shim b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3" address="unix:///run/containerd/s/312162ec10d487a81da9890a2a33a8141292214ccc9ffcd3adc246edfeeb4177" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:46.864310 containerd[2010]: time="2026-03-12T23:47:46.864230304Z" level=info msg="connecting to shim 5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220" address="unix:///run/containerd/s/9a8d8db35bdb6ac7a87d385c8043821eaa51297f8adde1973711fde099d04031" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:46.869093 systemd-networkd[1821]: calidb08d0afd73: Link UP Mar 12 23:47:46.881321 systemd-networkd[1821]: calidb08d0afd73: Gained carrier Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.193 [INFO][5588] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0 goldmane-cccfbd5cf- calico-system d3835311-ddc2-442e-a8c0-72f59b9ff3ae 879 0 2026-03-12 23:47:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-143 goldmane-cccfbd5cf-xmwzp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] 
calidb08d0afd73 [] [] }} ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-xmwzp" WorkloadEndpoint="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.196 [INFO][5588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-xmwzp" WorkloadEndpoint="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.387 [INFO][5623] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" HandleID="k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Workload="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.436 [INFO][5623] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" HandleID="k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Workload="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003dc310), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-143", "pod":"goldmane-cccfbd5cf-xmwzp", "timestamp":"2026-03-12 23:47:46.387856737 +0000 UTC"}, Hostname:"ip-172-31-24-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186580)} Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.436 [INFO][5623] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.614 [INFO][5623] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.614 [INFO][5623] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-143' Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.626 [INFO][5623] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" host="ip-172-31-24-143" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.649 [INFO][5623] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-143" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.705 [INFO][5623] ipam/ipam.go 526: Trying affinity for 192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.716 [INFO][5623] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.726 [INFO][5623] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ip-172-31-24-143" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.727 [INFO][5623] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" host="ip-172-31-24-143" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.738 [INFO][5623] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.761 [INFO][5623] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" host="ip-172-31-24-143" Mar 12 23:47:46.967338 
containerd[2010]: 2026-03-12 23:47:46.811 [INFO][5623] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.72/26] block=192.168.37.64/26 handle="k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" host="ip-172-31-24-143" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.812 [INFO][5623] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.72/26] handle="k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" host="ip-172-31-24-143" Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.812 [INFO][5623] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:46.967338 containerd[2010]: 2026-03-12 23:47:46.812 [INFO][5623] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.72/26] IPv6=[] ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" HandleID="k8s-pod-network.2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Workload="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" Mar 12 23:47:46.975312 containerd[2010]: 2026-03-12 23:47:46.843 [INFO][5588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-xmwzp" WorkloadEndpoint="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"d3835311-ddc2-442e-a8c0-72f59b9ff3ae", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"", Pod:"goldmane-cccfbd5cf-xmwzp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb08d0afd73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:46.975312 containerd[2010]: 2026-03-12 23:47:46.843 [INFO][5588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.72/32] ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-xmwzp" WorkloadEndpoint="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" Mar 12 23:47:46.975312 containerd[2010]: 2026-03-12 23:47:46.843 [INFO][5588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb08d0afd73 ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-xmwzp" WorkloadEndpoint="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" Mar 12 23:47:46.975312 containerd[2010]: 2026-03-12 23:47:46.897 [INFO][5588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-xmwzp" WorkloadEndpoint="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" Mar 12 23:47:46.975312 containerd[2010]: 2026-03-12 23:47:46.898 [INFO][5588] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-xmwzp" WorkloadEndpoint="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"d3835311-ddc2-442e-a8c0-72f59b9ff3ae", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-143", ContainerID:"2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e", Pod:"goldmane-cccfbd5cf-xmwzp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb08d0afd73", MAC:"02:e4:93:88:1d:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:46.975312 containerd[2010]: 2026-03-12 23:47:46.943 [INFO][5588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-xmwzp" WorkloadEndpoint="ip--172--31--24--143-k8s-goldmane--cccfbd5cf--xmwzp-eth0" Mar 12 23:47:47.013398 systemd[1]: Started cri-containerd-b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3.scope - libcontainer container b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3. Mar 12 23:47:47.043656 systemd[1]: Started cri-containerd-5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220.scope - libcontainer container 5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220. Mar 12 23:47:47.087013 containerd[2010]: time="2026-03-12T23:47:47.086902377Z" level=info msg="connecting to shim 2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e" address="unix:///run/containerd/s/e8bf265d54260779c4f6d1de511c8d1b072947f9e27e3e3eef7fa8b807996b49" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:47.181489 systemd[1]: Started cri-containerd-2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e.scope - libcontainer container 2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e. 
Mar 12 23:47:47.349498 containerd[2010]: time="2026-03-12T23:47:47.348205198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c756fffd4-j2tp7,Uid:abc90d13-06a6-4b38-88bd-93ea7ceb4e66,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3\"" Mar 12 23:47:47.470453 containerd[2010]: time="2026-03-12T23:47:47.467719211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-xmwzp,Uid:d3835311-ddc2-442e-a8c0-72f59b9ff3ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e\"" Mar 12 23:47:47.492455 containerd[2010]: time="2026-03-12T23:47:47.491495075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfb7fbb5f-qtttw,Uid:64a54bab-a76e-4249-995e-1d55d1566fc4,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220\"" Mar 12 23:47:47.701239 systemd-networkd[1821]: cali4785379484d: Gained IPv6LL Mar 12 23:47:47.956729 systemd-networkd[1821]: calic270d103a47: Gained IPv6LL Mar 12 23:47:48.469692 systemd-networkd[1821]: calidb08d0afd73: Gained IPv6LL Mar 12 23:47:48.855193 containerd[2010]: time="2026-03-12T23:47:48.855134414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:48.858770 containerd[2010]: time="2026-03-12T23:47:48.858717026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 12 23:47:48.860849 containerd[2010]: time="2026-03-12T23:47:48.860779058Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:48.869558 containerd[2010]: time="2026-03-12T23:47:48.869458922Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:48.871608 containerd[2010]: time="2026-03-12T23:47:48.871557218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 4.140697041s" Mar 12 23:47:48.871858 containerd[2010]: time="2026-03-12T23:47:48.871828874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 12 23:47:48.874628 containerd[2010]: time="2026-03-12T23:47:48.874560266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 23:47:48.883272 containerd[2010]: time="2026-03-12T23:47:48.883178642Z" level=info msg="CreateContainer within sandbox \"c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 23:47:48.908847 containerd[2010]: time="2026-03-12T23:47:48.908649386Z" level=info msg="Container 9376f122e9532a8440fc21ca7baf4346642d8c2e622d6c93e78ebb816e0312dc: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:48.916862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947791368.mount: Deactivated successfully. 
Mar 12 23:47:48.936093 containerd[2010]: time="2026-03-12T23:47:48.935969954Z" level=info msg="CreateContainer within sandbox \"c87e0b2283eff0b9755afead8281f76d096dfb54788cea22d48157ac1c598d38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9376f122e9532a8440fc21ca7baf4346642d8c2e622d6c93e78ebb816e0312dc\"" Mar 12 23:47:48.937664 containerd[2010]: time="2026-03-12T23:47:48.936986522Z" level=info msg="StartContainer for \"9376f122e9532a8440fc21ca7baf4346642d8c2e622d6c93e78ebb816e0312dc\"" Mar 12 23:47:48.940109 containerd[2010]: time="2026-03-12T23:47:48.940034690Z" level=info msg="connecting to shim 9376f122e9532a8440fc21ca7baf4346642d8c2e622d6c93e78ebb816e0312dc" address="unix:///run/containerd/s/0b327fa074cfc11ba9265453bd60b14bc0a6782f81663be35dcb9f48771d5a9b" protocol=ttrpc version=3 Mar 12 23:47:48.992726 systemd[1]: Started cri-containerd-9376f122e9532a8440fc21ca7baf4346642d8c2e622d6c93e78ebb816e0312dc.scope - libcontainer container 9376f122e9532a8440fc21ca7baf4346642d8c2e622d6c93e78ebb816e0312dc. 
Mar 12 23:47:49.087940 containerd[2010]: time="2026-03-12T23:47:49.087789323Z" level=info msg="StartContainer for \"9376f122e9532a8440fc21ca7baf4346642d8c2e622d6c93e78ebb816e0312dc\" returns successfully" Mar 12 23:47:49.219121 containerd[2010]: time="2026-03-12T23:47:49.216473975Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:49.220045 containerd[2010]: time="2026-03-12T23:47:49.219684443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 12 23:47:49.227805 containerd[2010]: time="2026-03-12T23:47:49.227728835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 353.099941ms" Mar 12 23:47:49.228041 containerd[2010]: time="2026-03-12T23:47:49.227802587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 12 23:47:49.233968 containerd[2010]: time="2026-03-12T23:47:49.230809872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 23:47:49.243054 containerd[2010]: time="2026-03-12T23:47:49.242950092Z" level=info msg="CreateContainer within sandbox \"b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 23:47:49.268582 containerd[2010]: time="2026-03-12T23:47:49.268382520Z" level=info msg="Container 6efb0ed9cbe68ec188dbf963733506f09c9e63c0a10053895e5b036fd2767bfb: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:49.286800 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3791668219.mount: Deactivated successfully. Mar 12 23:47:49.307191 containerd[2010]: time="2026-03-12T23:47:49.307112436Z" level=info msg="CreateContainer within sandbox \"b0e7fd7e3df0bf734afe167ad1fe8456749f389b8078e64d55aaa6195b2093d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6efb0ed9cbe68ec188dbf963733506f09c9e63c0a10053895e5b036fd2767bfb\"" Mar 12 23:47:49.310295 containerd[2010]: time="2026-03-12T23:47:49.310155324Z" level=info msg="StartContainer for \"6efb0ed9cbe68ec188dbf963733506f09c9e63c0a10053895e5b036fd2767bfb\"" Mar 12 23:47:49.314921 containerd[2010]: time="2026-03-12T23:47:49.313821072Z" level=info msg="connecting to shim 6efb0ed9cbe68ec188dbf963733506f09c9e63c0a10053895e5b036fd2767bfb" address="unix:///run/containerd/s/312162ec10d487a81da9890a2a33a8141292214ccc9ffcd3adc246edfeeb4177" protocol=ttrpc version=3 Mar 12 23:47:49.362303 systemd[1]: Started cri-containerd-6efb0ed9cbe68ec188dbf963733506f09c9e63c0a10053895e5b036fd2767bfb.scope - libcontainer container 6efb0ed9cbe68ec188dbf963733506f09c9e63c0a10053895e5b036fd2767bfb. 
Mar 12 23:47:49.487697 containerd[2010]: time="2026-03-12T23:47:49.487220569Z" level=info msg="StartContainer for \"6efb0ed9cbe68ec188dbf963733506f09c9e63c0a10053895e5b036fd2767bfb\" returns successfully" Mar 12 23:47:49.576199 kubelet[3349]: I0312 23:47:49.575346 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c756fffd4-j2tp7" podStartSLOduration=37.7005438 podStartE2EDuration="39.575324749s" podCreationTimestamp="2026-03-12 23:47:10 +0000 UTC" firstStartedPulling="2026-03-12 23:47:47.35492029 +0000 UTC m=+59.853674002" lastFinishedPulling="2026-03-12 23:47:49.229701239 +0000 UTC m=+61.728454951" observedRunningTime="2026-03-12 23:47:49.546764377 +0000 UTC m=+62.045518257" watchObservedRunningTime="2026-03-12 23:47:49.575324749 +0000 UTC m=+62.074078473" Mar 12 23:47:49.580198 kubelet[3349]: I0312 23:47:49.579906 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c756fffd4-8qfx6" podStartSLOduration=35.435690428 podStartE2EDuration="39.579879901s" podCreationTimestamp="2026-03-12 23:47:10 +0000 UTC" firstStartedPulling="2026-03-12 23:47:44.729447813 +0000 UTC m=+57.228201525" lastFinishedPulling="2026-03-12 23:47:48.873637214 +0000 UTC m=+61.372390998" observedRunningTime="2026-03-12 23:47:49.576621493 +0000 UTC m=+62.075375217" watchObservedRunningTime="2026-03-12 23:47:49.579879901 +0000 UTC m=+62.078633637" Mar 12 23:47:50.535775 kubelet[3349]: I0312 23:47:50.535717 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 23:47:50.537792 kubelet[3349]: I0312 23:47:50.537748 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 23:47:50.787850 ntpd[2185]: Listen normally on 10 cali98ce76aad37 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 12 23:47:50.789936 ntpd[2185]: 12 Mar 23:47:50 ntpd[2185]: Listen normally on 10 cali98ce76aad37 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 12 23:47:50.789936 
ntpd[2185]: 12 Mar 23:47:50 ntpd[2185]: Listen normally on 11 calida923888b60 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 12 23:47:50.789936 ntpd[2185]: 12 Mar 23:47:50 ntpd[2185]: Listen normally on 12 calia75c1da2da7 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 12 23:47:50.789936 ntpd[2185]: 12 Mar 23:47:50 ntpd[2185]: Listen normally on 13 calic270d103a47 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 12 23:47:50.789936 ntpd[2185]: 12 Mar 23:47:50 ntpd[2185]: Listen normally on 14 cali4785379484d [fe80::ecee:eeff:feee:eeee%13]:123 Mar 12 23:47:50.789936 ntpd[2185]: 12 Mar 23:47:50 ntpd[2185]: Listen normally on 15 calidb08d0afd73 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 12 23:47:50.787937 ntpd[2185]: Listen normally on 11 calida923888b60 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 12 23:47:50.787984 ntpd[2185]: Listen normally on 12 calia75c1da2da7 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 12 23:47:50.788092 ntpd[2185]: Listen normally on 13 calic270d103a47 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 12 23:47:50.788254 ntpd[2185]: Listen normally on 14 cali4785379484d [fe80::ecee:eeff:feee:eeee%13]:123 Mar 12 23:47:50.788299 ntpd[2185]: Listen normally on 15 calidb08d0afd73 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 12 23:47:51.286453 systemd[1]: Started sshd@8-172.31.24.143:22-4.153.228.146:52098.service - OpenSSH per-connection server daemon (4.153.228.146:52098). Mar 12 23:47:51.791057 sshd[5922]: Accepted publickey for core from 4.153.228.146 port 52098 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:47:51.794146 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:47:51.806871 systemd-logind[1980]: New session 9 of user core. Mar 12 23:47:51.815258 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 12 23:47:52.220704 sshd[5925]: Connection closed by 4.153.228.146 port 52098 Mar 12 23:47:52.222722 sshd-session[5922]: pam_unix(sshd:session): session closed for user core Mar 12 23:47:52.234772 systemd[1]: sshd@8-172.31.24.143:22-4.153.228.146:52098.service: Deactivated successfully. Mar 12 23:47:52.235329 systemd-logind[1980]: Session 9 logged out. Waiting for processes to exit. Mar 12 23:47:52.243334 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 23:47:52.250627 systemd-logind[1980]: Removed session 9. Mar 12 23:47:53.493653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount16216457.mount: Deactivated successfully. Mar 12 23:47:54.183077 containerd[2010]: time="2026-03-12T23:47:54.182071168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:54.184247 containerd[2010]: time="2026-03-12T23:47:54.184204324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 12 23:47:54.187489 containerd[2010]: time="2026-03-12T23:47:54.187420348Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:54.203720 containerd[2010]: time="2026-03-12T23:47:54.203410144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:54.206424 containerd[2010]: time="2026-03-12T23:47:54.206332708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 4.97546754s" Mar 12 23:47:54.206424 containerd[2010]: time="2026-03-12T23:47:54.206415628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 12 23:47:54.207906 containerd[2010]: time="2026-03-12T23:47:54.207841456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 23:47:54.217954 containerd[2010]: time="2026-03-12T23:47:54.217610464Z" level=info msg="CreateContainer within sandbox \"2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 23:47:54.244020 containerd[2010]: time="2026-03-12T23:47:54.243540160Z" level=info msg="Container 2e9d69876cc7f39f0d78f2bbf3f8d7117f1b36e56fb0ebe8ae5220a6ae29db2a: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:54.259492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1954797587.mount: Deactivated successfully. 
Mar 12 23:47:54.275077 containerd[2010]: time="2026-03-12T23:47:54.274488461Z" level=info msg="CreateContainer within sandbox \"2c0c868502ddcfeef3cf4dc6badb932682c39e05bb80d20b18f50a3158f9318e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2e9d69876cc7f39f0d78f2bbf3f8d7117f1b36e56fb0ebe8ae5220a6ae29db2a\"" Mar 12 23:47:54.277409 containerd[2010]: time="2026-03-12T23:47:54.277269785Z" level=info msg="StartContainer for \"2e9d69876cc7f39f0d78f2bbf3f8d7117f1b36e56fb0ebe8ae5220a6ae29db2a\"" Mar 12 23:47:54.280255 containerd[2010]: time="2026-03-12T23:47:54.280181933Z" level=info msg="connecting to shim 2e9d69876cc7f39f0d78f2bbf3f8d7117f1b36e56fb0ebe8ae5220a6ae29db2a" address="unix:///run/containerd/s/e8bf265d54260779c4f6d1de511c8d1b072947f9e27e3e3eef7fa8b807996b49" protocol=ttrpc version=3 Mar 12 23:47:54.327326 systemd[1]: Started cri-containerd-2e9d69876cc7f39f0d78f2bbf3f8d7117f1b36e56fb0ebe8ae5220a6ae29db2a.scope - libcontainer container 2e9d69876cc7f39f0d78f2bbf3f8d7117f1b36e56fb0ebe8ae5220a6ae29db2a. 
Mar 12 23:47:54.433984 containerd[2010]: time="2026-03-12T23:47:54.433519661Z" level=info msg="StartContainer for \"2e9d69876cc7f39f0d78f2bbf3f8d7117f1b36e56fb0ebe8ae5220a6ae29db2a\" returns successfully" Mar 12 23:47:54.622139 kubelet[3349]: I0312 23:47:54.621069 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-xmwzp" podStartSLOduration=36.892029529 podStartE2EDuration="43.619984182s" podCreationTimestamp="2026-03-12 23:47:11 +0000 UTC" firstStartedPulling="2026-03-12 23:47:47.479719523 +0000 UTC m=+59.978473223" lastFinishedPulling="2026-03-12 23:47:54.207674164 +0000 UTC m=+66.706427876" observedRunningTime="2026-03-12 23:47:54.619365594 +0000 UTC m=+67.118119498" watchObservedRunningTime="2026-03-12 23:47:54.619984182 +0000 UTC m=+67.118737894" Mar 12 23:47:56.841277 containerd[2010]: time="2026-03-12T23:47:56.841201173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:56.844930 containerd[2010]: time="2026-03-12T23:47:56.844857441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 12 23:47:56.847168 containerd[2010]: time="2026-03-12T23:47:56.847106349Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:56.853773 containerd[2010]: time="2026-03-12T23:47:56.853699617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:56.856526 containerd[2010]: time="2026-03-12T23:47:56.856319553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id 
\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.647894417s" Mar 12 23:47:56.856526 containerd[2010]: time="2026-03-12T23:47:56.856376085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 12 23:47:56.891765 containerd[2010]: time="2026-03-12T23:47:56.891691006Z" level=info msg="CreateContainer within sandbox \"5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 23:47:56.910165 containerd[2010]: time="2026-03-12T23:47:56.910089694Z" level=info msg="Container 376f617ee3e7a25c9f4af8e384f299ca88ea2b97c79c10b4c09d3f82807d78ba: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:56.926446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130311524.mount: Deactivated successfully. 
Mar 12 23:47:56.930960 containerd[2010]: time="2026-03-12T23:47:56.930898258Z" level=info msg="CreateContainer within sandbox \"5fa5ff540b18d9b248a28366979804792bfeb08320fada14ca9f23496cd9b220\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"376f617ee3e7a25c9f4af8e384f299ca88ea2b97c79c10b4c09d3f82807d78ba\"" Mar 12 23:47:56.932750 containerd[2010]: time="2026-03-12T23:47:56.932106442Z" level=info msg="StartContainer for \"376f617ee3e7a25c9f4af8e384f299ca88ea2b97c79c10b4c09d3f82807d78ba\"" Mar 12 23:47:56.934930 containerd[2010]: time="2026-03-12T23:47:56.934869970Z" level=info msg="connecting to shim 376f617ee3e7a25c9f4af8e384f299ca88ea2b97c79c10b4c09d3f82807d78ba" address="unix:///run/containerd/s/9a8d8db35bdb6ac7a87d385c8043821eaa51297f8adde1973711fde099d04031" protocol=ttrpc version=3 Mar 12 23:47:56.979282 systemd[1]: Started cri-containerd-376f617ee3e7a25c9f4af8e384f299ca88ea2b97c79c10b4c09d3f82807d78ba.scope - libcontainer container 376f617ee3e7a25c9f4af8e384f299ca88ea2b97c79c10b4c09d3f82807d78ba. Mar 12 23:47:57.085847 containerd[2010]: time="2026-03-12T23:47:57.085791127Z" level=info msg="StartContainer for \"376f617ee3e7a25c9f4af8e384f299ca88ea2b97c79c10b4c09d3f82807d78ba\" returns successfully" Mar 12 23:47:57.328473 systemd[1]: Started sshd@9-172.31.24.143:22-4.153.228.146:52102.service - OpenSSH per-connection server daemon (4.153.228.146:52102). 
Mar 12 23:47:57.637770 kubelet[3349]: I0312 23:47:57.637133 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cfb7fbb5f-qtttw" podStartSLOduration=35.282913763 podStartE2EDuration="44.637105221s" podCreationTimestamp="2026-03-12 23:47:13 +0000 UTC" firstStartedPulling="2026-03-12 23:47:47.503497931 +0000 UTC m=+60.002251631" lastFinishedPulling="2026-03-12 23:47:56.857689389 +0000 UTC m=+69.356443089" observedRunningTime="2026-03-12 23:47:57.635679573 +0000 UTC m=+70.134433309" watchObservedRunningTime="2026-03-12 23:47:57.637105221 +0000 UTC m=+70.135859473" Mar 12 23:47:57.837890 sshd[6096]: Accepted publickey for core from 4.153.228.146 port 52102 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:47:57.840724 sshd-session[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:47:57.849659 systemd-logind[1980]: New session 10 of user core. Mar 12 23:47:57.869280 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 12 23:47:58.223981 sshd[6127]: Connection closed by 4.153.228.146 port 52102 Mar 12 23:47:58.222855 sshd-session[6096]: pam_unix(sshd:session): session closed for user core Mar 12 23:47:58.230520 systemd[1]: sshd@9-172.31.24.143:22-4.153.228.146:52102.service: Deactivated successfully. Mar 12 23:47:58.236102 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 23:47:58.238790 systemd-logind[1980]: Session 10 logged out. Waiting for processes to exit. Mar 12 23:47:58.242376 systemd-logind[1980]: Removed session 10. Mar 12 23:48:03.320477 systemd[1]: Started sshd@10-172.31.24.143:22-4.153.228.146:58814.service - OpenSSH per-connection server daemon (4.153.228.146:58814). 
Mar 12 23:48:03.821900 sshd[6165]: Accepted publickey for core from 4.153.228.146 port 58814 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:03.825489 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:03.839371 systemd-logind[1980]: New session 11 of user core.
Mar 12 23:48:03.848283 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 23:48:04.210185 sshd[6169]: Connection closed by 4.153.228.146 port 58814
Mar 12 23:48:04.211341 sshd-session[6165]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:04.221570 systemd[1]: sshd@10-172.31.24.143:22-4.153.228.146:58814.service: Deactivated successfully.
Mar 12 23:48:04.221856 systemd-logind[1980]: Session 11 logged out. Waiting for processes to exit.
Mar 12 23:48:04.227290 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 23:48:04.233271 systemd-logind[1980]: Removed session 11.
Mar 12 23:48:04.304626 systemd[1]: Started sshd@11-172.31.24.143:22-4.153.228.146:58816.service - OpenSSH per-connection server daemon (4.153.228.146:58816).
Mar 12 23:48:04.778060 sshd[6182]: Accepted publickey for core from 4.153.228.146 port 58816 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:04.780716 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:04.791821 systemd-logind[1980]: New session 12 of user core.
Mar 12 23:48:04.801278 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 23:48:05.232070 sshd[6201]: Connection closed by 4.153.228.146 port 58816
Mar 12 23:48:05.233788 sshd-session[6182]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:05.242339 systemd-logind[1980]: Session 12 logged out. Waiting for processes to exit.
Mar 12 23:48:05.243646 systemd[1]: sshd@11-172.31.24.143:22-4.153.228.146:58816.service: Deactivated successfully.
Mar 12 23:48:05.251980 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 23:48:05.256406 systemd-logind[1980]: Removed session 12.
Mar 12 23:48:05.326372 systemd[1]: Started sshd@12-172.31.24.143:22-4.153.228.146:58828.service - OpenSSH per-connection server daemon (4.153.228.146:58828).
Mar 12 23:48:05.801401 sshd[6211]: Accepted publickey for core from 4.153.228.146 port 58828 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:05.803629 sshd-session[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:05.812380 systemd-logind[1980]: New session 13 of user core.
Mar 12 23:48:05.822477 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 23:48:06.155384 sshd[6214]: Connection closed by 4.153.228.146 port 58828
Mar 12 23:48:06.157486 sshd-session[6211]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:06.165265 systemd[1]: sshd@12-172.31.24.143:22-4.153.228.146:58828.service: Deactivated successfully.
Mar 12 23:48:06.171446 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 23:48:06.174924 systemd-logind[1980]: Session 13 logged out. Waiting for processes to exit.
Mar 12 23:48:06.177720 systemd-logind[1980]: Removed session 13.
Mar 12 23:48:11.249125 systemd[1]: Started sshd@13-172.31.24.143:22-4.153.228.146:40704.service - OpenSSH per-connection server daemon (4.153.228.146:40704).
Mar 12 23:48:11.707638 sshd[6234]: Accepted publickey for core from 4.153.228.146 port 40704 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:11.710059 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:11.722692 systemd-logind[1980]: New session 14 of user core.
Mar 12 23:48:11.727288 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 23:48:12.092459 sshd[6237]: Connection closed by 4.153.228.146 port 40704
Mar 12 23:48:12.093562 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:12.100831 systemd[1]: sshd@13-172.31.24.143:22-4.153.228.146:40704.service: Deactivated successfully.
Mar 12 23:48:12.105867 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 23:48:12.108520 systemd-logind[1980]: Session 14 logged out. Waiting for processes to exit.
Mar 12 23:48:12.112061 systemd-logind[1980]: Removed session 14.
Mar 12 23:48:12.186618 systemd[1]: Started sshd@14-172.31.24.143:22-4.153.228.146:40712.service - OpenSSH per-connection server daemon (4.153.228.146:40712).
Mar 12 23:48:12.654652 sshd[6249]: Accepted publickey for core from 4.153.228.146 port 40712 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:12.657228 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:12.666938 systemd-logind[1980]: New session 15 of user core.
Mar 12 23:48:12.676299 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 23:48:13.407345 sshd[6252]: Connection closed by 4.153.228.146 port 40712
Mar 12 23:48:13.408714 sshd-session[6249]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:13.419126 systemd[1]: sshd@14-172.31.24.143:22-4.153.228.146:40712.service: Deactivated successfully.
Mar 12 23:48:13.428347 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 23:48:13.431838 systemd-logind[1980]: Session 15 logged out. Waiting for processes to exit.
Mar 12 23:48:13.436732 systemd-logind[1980]: Removed session 15.
Mar 12 23:48:13.504456 systemd[1]: Started sshd@15-172.31.24.143:22-4.153.228.146:40724.service - OpenSSH per-connection server daemon (4.153.228.146:40724).
Mar 12 23:48:13.996688 sshd[6262]: Accepted publickey for core from 4.153.228.146 port 40724 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:13.999909 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:14.011926 systemd-logind[1980]: New session 16 of user core.
Mar 12 23:48:14.022301 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 23:48:15.347706 sshd[6269]: Connection closed by 4.153.228.146 port 40724
Mar 12 23:48:15.349338 sshd-session[6262]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:15.360030 systemd-logind[1980]: Session 16 logged out. Waiting for processes to exit.
Mar 12 23:48:15.360890 systemd[1]: sshd@15-172.31.24.143:22-4.153.228.146:40724.service: Deactivated successfully.
Mar 12 23:48:15.368534 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 23:48:15.377275 systemd-logind[1980]: Removed session 16.
Mar 12 23:48:15.441181 systemd[1]: Started sshd@16-172.31.24.143:22-4.153.228.146:40734.service - OpenSSH per-connection server daemon (4.153.228.146:40734).
Mar 12 23:48:15.926064 sshd[6296]: Accepted publickey for core from 4.153.228.146 port 40734 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:15.930889 sshd-session[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:15.944472 systemd-logind[1980]: New session 17 of user core.
Mar 12 23:48:15.953615 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 23:48:16.704014 sshd[6299]: Connection closed by 4.153.228.146 port 40734
Mar 12 23:48:16.705232 sshd-session[6296]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:16.716489 systemd-logind[1980]: Session 17 logged out. Waiting for processes to exit.
Mar 12 23:48:16.718582 systemd[1]: sshd@16-172.31.24.143:22-4.153.228.146:40734.service: Deactivated successfully.
Mar 12 23:48:16.725071 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 23:48:16.731067 systemd-logind[1980]: Removed session 17.
Mar 12 23:48:16.797690 systemd[1]: Started sshd@17-172.31.24.143:22-4.153.228.146:40736.service - OpenSSH per-connection server daemon (4.153.228.146:40736).
Mar 12 23:48:17.314030 sshd[6320]: Accepted publickey for core from 4.153.228.146 port 40736 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:17.317122 sshd-session[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:17.332382 systemd-logind[1980]: New session 18 of user core.
Mar 12 23:48:17.340636 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 23:48:17.720121 sshd[6325]: Connection closed by 4.153.228.146 port 40736
Mar 12 23:48:17.720843 sshd-session[6320]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:17.735453 systemd[1]: sshd@17-172.31.24.143:22-4.153.228.146:40736.service: Deactivated successfully.
Mar 12 23:48:17.741096 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 23:48:17.745139 systemd-logind[1980]: Session 18 logged out. Waiting for processes to exit.
Mar 12 23:48:17.752557 systemd-logind[1980]: Removed session 18.
Mar 12 23:48:22.814680 systemd[1]: Started sshd@18-172.31.24.143:22-4.153.228.146:43246.service - OpenSSH per-connection server daemon (4.153.228.146:43246).
Mar 12 23:48:23.278139 sshd[6339]: Accepted publickey for core from 4.153.228.146 port 43246 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:23.280704 sshd-session[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:23.293009 systemd-logind[1980]: New session 19 of user core.
Mar 12 23:48:23.301637 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 23:48:23.673900 sshd[6344]: Connection closed by 4.153.228.146 port 43246
Mar 12 23:48:23.675616 sshd-session[6339]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:23.684305 systemd[1]: sshd@18-172.31.24.143:22-4.153.228.146:43246.service: Deactivated successfully.
Mar 12 23:48:23.689406 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 23:48:23.691770 systemd-logind[1980]: Session 19 logged out. Waiting for processes to exit.
Mar 12 23:48:23.694911 systemd-logind[1980]: Removed session 19.
Mar 12 23:48:24.867024 kubelet[3349]: I0312 23:48:24.866727 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 23:48:28.772449 systemd[1]: Started sshd@19-172.31.24.143:22-4.153.228.146:43256.service - OpenSSH per-connection server daemon (4.153.228.146:43256).
Mar 12 23:48:29.228960 sshd[6417]: Accepted publickey for core from 4.153.228.146 port 43256 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:29.231409 sshd-session[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:29.241546 systemd-logind[1980]: New session 20 of user core.
Mar 12 23:48:29.250341 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 23:48:29.597051 sshd[6421]: Connection closed by 4.153.228.146 port 43256
Mar 12 23:48:29.597858 sshd-session[6417]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:29.605707 systemd[1]: sshd@19-172.31.24.143:22-4.153.228.146:43256.service: Deactivated successfully.
Mar 12 23:48:29.612170 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 23:48:29.614097 systemd-logind[1980]: Session 20 logged out. Waiting for processes to exit.
Mar 12 23:48:29.617732 systemd-logind[1980]: Removed session 20.
Mar 12 23:48:31.692456 kubelet[3349]: I0312 23:48:31.692222 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 23:48:34.689099 systemd[1]: Started sshd@20-172.31.24.143:22-4.153.228.146:55468.service - OpenSSH per-connection server daemon (4.153.228.146:55468).
Mar 12 23:48:35.147901 sshd[6460]: Accepted publickey for core from 4.153.228.146 port 55468 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:35.150680 sshd-session[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:35.159093 systemd-logind[1980]: New session 21 of user core.
Mar 12 23:48:35.168292 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 23:48:35.514152 sshd[6463]: Connection closed by 4.153.228.146 port 55468
Mar 12 23:48:35.515283 sshd-session[6460]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:35.522354 systemd[1]: sshd@20-172.31.24.143:22-4.153.228.146:55468.service: Deactivated successfully.
Mar 12 23:48:35.528337 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 23:48:35.531758 systemd-logind[1980]: Session 21 logged out. Waiting for processes to exit.
Mar 12 23:48:35.538641 systemd-logind[1980]: Removed session 21.
Mar 12 23:48:40.620364 systemd[1]: Started sshd@21-172.31.24.143:22-4.153.228.146:50202.service - OpenSSH per-connection server daemon (4.153.228.146:50202).
Mar 12 23:48:41.077908 sshd[6477]: Accepted publickey for core from 4.153.228.146 port 50202 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:41.081308 sshd-session[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:41.094099 systemd-logind[1980]: New session 22 of user core.
Mar 12 23:48:41.100321 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 12 23:48:41.436715 sshd[6480]: Connection closed by 4.153.228.146 port 50202
Mar 12 23:48:41.437728 sshd-session[6477]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:41.446931 systemd[1]: sshd@21-172.31.24.143:22-4.153.228.146:50202.service: Deactivated successfully.
Mar 12 23:48:41.451821 systemd[1]: session-22.scope: Deactivated successfully.
Mar 12 23:48:41.454609 systemd-logind[1980]: Session 22 logged out. Waiting for processes to exit.
Mar 12 23:48:41.460368 systemd-logind[1980]: Removed session 22.
Mar 12 23:48:46.530428 systemd[1]: Started sshd@22-172.31.24.143:22-4.153.228.146:50218.service - OpenSSH per-connection server daemon (4.153.228.146:50218).
Mar 12 23:48:47.004147 sshd[6514]: Accepted publickey for core from 4.153.228.146 port 50218 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:47.008736 sshd-session[6514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:47.022767 systemd-logind[1980]: New session 23 of user core.
Mar 12 23:48:47.031645 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 12 23:48:47.424172 sshd[6517]: Connection closed by 4.153.228.146 port 50218
Mar 12 23:48:47.425313 sshd-session[6514]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:47.437290 systemd[1]: sshd@22-172.31.24.143:22-4.153.228.146:50218.service: Deactivated successfully.
Mar 12 23:48:47.442882 systemd[1]: session-23.scope: Deactivated successfully.
Mar 12 23:48:47.446343 systemd-logind[1980]: Session 23 logged out. Waiting for processes to exit.
Mar 12 23:48:47.452091 systemd-logind[1980]: Removed session 23.
Mar 12 23:49:02.544932 systemd[1]: cri-containerd-f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398.scope: Deactivated successfully.
Mar 12 23:49:02.545622 systemd[1]: cri-containerd-f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398.scope: Consumed 21.981s CPU time, 113.8M memory peak, 304K read from disk.
Mar 12 23:49:02.553258 containerd[2010]: time="2026-03-12T23:49:02.553159404Z" level=info msg="received container exit event container_id:\"f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398\" id:\"f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398\" pid:3945 exit_status:1 exited_at:{seconds:1773359342 nanos:552573732}"
Mar 12 23:49:02.598567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398-rootfs.mount: Deactivated successfully.
Mar 12 23:49:02.740869 systemd[1]: cri-containerd-be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49.scope: Deactivated successfully.
Mar 12 23:49:02.742425 systemd[1]: cri-containerd-be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49.scope: Consumed 5.371s CPU time, 59.7M memory peak, 64K read from disk.
Mar 12 23:49:02.748264 containerd[2010]: time="2026-03-12T23:49:02.748213513Z" level=info msg="received container exit event container_id:\"be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49\" id:\"be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49\" pid:3165 exit_status:1 exited_at:{seconds:1773359342 nanos:746500633}"
Mar 12 23:49:02.799956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49-rootfs.mount: Deactivated successfully.
Mar 12 23:49:02.870981 kubelet[3349]: I0312 23:49:02.870703 3349 scope.go:117] "RemoveContainer" containerID="be5ca988691be188bf36d3d44b1412e9b1390d3640e1705d882aaf16eda5da49"
Mar 12 23:49:02.879854 kubelet[3349]: I0312 23:49:02.879795 3349 scope.go:117] "RemoveContainer" containerID="f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398"
Mar 12 23:49:02.881557 containerd[2010]: time="2026-03-12T23:49:02.881004745Z" level=info msg="CreateContainer within sandbox \"c379f654373486c4f9161a22d0a19ffd1174249595abf75cb74dcbf87f6a8359\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 12 23:49:02.883921 containerd[2010]: time="2026-03-12T23:49:02.883859365Z" level=info msg="CreateContainer within sandbox \"f85a3412d430f96abfbfa62a1456cb5d9938082f7077bb24f40530ef19c05309\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 12 23:49:02.908150 containerd[2010]: time="2026-03-12T23:49:02.907902925Z" level=info msg="Container 992610753913017b8113a16133f74ef699dbe6a9877e203c5d827238f8a37dbf: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:49:02.914066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947659898.mount: Deactivated successfully.
Mar 12 23:49:02.920617 containerd[2010]: time="2026-03-12T23:49:02.920547842Z" level=info msg="Container 88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:49:02.937288 containerd[2010]: time="2026-03-12T23:49:02.937161914Z" level=info msg="CreateContainer within sandbox \"c379f654373486c4f9161a22d0a19ffd1174249595abf75cb74dcbf87f6a8359\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"992610753913017b8113a16133f74ef699dbe6a9877e203c5d827238f8a37dbf\""
Mar 12 23:49:02.938195 containerd[2010]: time="2026-03-12T23:49:02.938131262Z" level=info msg="StartContainer for \"992610753913017b8113a16133f74ef699dbe6a9877e203c5d827238f8a37dbf\""
Mar 12 23:49:02.942334 containerd[2010]: time="2026-03-12T23:49:02.942144362Z" level=info msg="connecting to shim 992610753913017b8113a16133f74ef699dbe6a9877e203c5d827238f8a37dbf" address="unix:///run/containerd/s/f921a3b64df56a2583352bb8351fb721eda26fc65c5949ac4994f630f7525e29" protocol=ttrpc version=3
Mar 12 23:49:02.949956 containerd[2010]: time="2026-03-12T23:49:02.949810586Z" level=info msg="CreateContainer within sandbox \"f85a3412d430f96abfbfa62a1456cb5d9938082f7077bb24f40530ef19c05309\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601\""
Mar 12 23:49:02.952082 containerd[2010]: time="2026-03-12T23:49:02.952043354Z" level=info msg="StartContainer for \"88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601\""
Mar 12 23:49:02.955522 containerd[2010]: time="2026-03-12T23:49:02.955467782Z" level=info msg="connecting to shim 88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601" address="unix:///run/containerd/s/08ccef7032bdf2911c32b19d9af66fa4121013446f9ad86cd21dbe5443bce8ef" protocol=ttrpc version=3
Mar 12 23:49:02.984332 systemd[1]: Started cri-containerd-992610753913017b8113a16133f74ef699dbe6a9877e203c5d827238f8a37dbf.scope - libcontainer container 992610753913017b8113a16133f74ef699dbe6a9877e203c5d827238f8a37dbf.
Mar 12 23:49:03.006312 systemd[1]: Started cri-containerd-88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601.scope - libcontainer container 88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601.
Mar 12 23:49:03.100709 containerd[2010]: time="2026-03-12T23:49:03.099751366Z" level=info msg="StartContainer for \"88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601\" returns successfully"
Mar 12 23:49:03.119887 containerd[2010]: time="2026-03-12T23:49:03.119831303Z" level=info msg="StartContainer for \"992610753913017b8113a16133f74ef699dbe6a9877e203c5d827238f8a37dbf\" returns successfully"
Mar 12 23:49:07.318087 systemd[1]: cri-containerd-641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120.scope: Deactivated successfully.
Mar 12 23:49:07.318685 systemd[1]: cri-containerd-641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120.scope: Consumed 4.613s CPU time, 19.8M memory peak, 68K read from disk.
Mar 12 23:49:07.324617 containerd[2010]: time="2026-03-12T23:49:07.324557259Z" level=info msg="received container exit event container_id:\"641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120\" id:\"641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120\" pid:3195 exit_status:1 exited_at:{seconds:1773359347 nanos:324183915}"
Mar 12 23:49:07.367095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120-rootfs.mount: Deactivated successfully.
Mar 12 23:49:07.924026 kubelet[3349]: I0312 23:49:07.923956 3349 scope.go:117] "RemoveContainer" containerID="641633648b53a2824c4d41b8e2526ec4d162d8224492883e1f47bda6b376d120"
Mar 12 23:49:07.927094 containerd[2010]: time="2026-03-12T23:49:07.927038394Z" level=info msg="CreateContainer within sandbox \"398130d4757a6abb12c518ae80665daafecf5d047a8a7af246c9e354165da184\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 12 23:49:07.947320 containerd[2010]: time="2026-03-12T23:49:07.946009686Z" level=info msg="Container 51154e10ab850e72b9df0954d69288130ee3558d84b248feb459205621425992: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:49:07.972131 containerd[2010]: time="2026-03-12T23:49:07.972078019Z" level=info msg="CreateContainer within sandbox \"398130d4757a6abb12c518ae80665daafecf5d047a8a7af246c9e354165da184\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"51154e10ab850e72b9df0954d69288130ee3558d84b248feb459205621425992\""
Mar 12 23:49:07.973275 containerd[2010]: time="2026-03-12T23:49:07.973204951Z" level=info msg="StartContainer for \"51154e10ab850e72b9df0954d69288130ee3558d84b248feb459205621425992\""
Mar 12 23:49:07.975283 containerd[2010]: time="2026-03-12T23:49:07.975136795Z" level=info msg="connecting to shim 51154e10ab850e72b9df0954d69288130ee3558d84b248feb459205621425992" address="unix:///run/containerd/s/c1c17f1d6b5b0d05cf9464e745295a493845b4ad347fe9b24db8b315e623916c" protocol=ttrpc version=3
Mar 12 23:49:08.022328 systemd[1]: Started cri-containerd-51154e10ab850e72b9df0954d69288130ee3558d84b248feb459205621425992.scope - libcontainer container 51154e10ab850e72b9df0954d69288130ee3558d84b248feb459205621425992.
Mar 12 23:49:08.102967 containerd[2010]: time="2026-03-12T23:49:08.102905199Z" level=info msg="StartContainer for \"51154e10ab850e72b9df0954d69288130ee3558d84b248feb459205621425992\" returns successfully"
Mar 12 23:49:09.898013 kubelet[3349]: E0312 23:49:09.896915 3349 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-143?timeout=10s\": context deadline exceeded"
Mar 12 23:49:14.633493 systemd[1]: cri-containerd-88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601.scope: Deactivated successfully.
Mar 12 23:49:14.634367 systemd[1]: cri-containerd-88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601.scope: Consumed 463ms CPU time, 37.8M memory peak, 1M read from disk.
Mar 12 23:49:14.636433 containerd[2010]: time="2026-03-12T23:49:14.636245640Z" level=info msg="received container exit event container_id:\"88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601\" id:\"88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601\" pid:6668 exit_status:1 exited_at:{seconds:1773359354 nanos:635673336}"
Mar 12 23:49:14.680164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601-rootfs.mount: Deactivated successfully.
Mar 12 23:49:14.962864 kubelet[3349]: I0312 23:49:14.962713 3349 scope.go:117] "RemoveContainer" containerID="f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398"
Mar 12 23:49:14.964373 kubelet[3349]: I0312 23:49:14.964339 3349 scope.go:117] "RemoveContainer" containerID="88fef7fe1792aa3bb7dbe3a24d73c20283ed8f14ee5211e13cf74b92c443a601"
Mar 12 23:49:14.964978 kubelet[3349]: E0312 23:49:14.964877 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5588576f44-h24xd_tigera-operator(3ce07789-10ac-4219-9e5d-9ea36b833731)\"" pod="tigera-operator/tigera-operator-5588576f44-h24xd" podUID="3ce07789-10ac-4219-9e5d-9ea36b833731"
Mar 12 23:49:14.966491 containerd[2010]: time="2026-03-12T23:49:14.966434257Z" level=info msg="RemoveContainer for \"f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398\""
Mar 12 23:49:14.976257 containerd[2010]: time="2026-03-12T23:49:14.976067893Z" level=info msg="RemoveContainer for \"f123172f16295206c0abc58f7378147e1266cbcbeeeb75d47650a5d7a8a0e398\" returns successfully"
Mar 12 23:49:19.898036 kubelet[3349]: E0312 23:49:19.897760 3349 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-143?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"