Mar 12 23:45:30.119890 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 12 23:45:30.122179 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Mar 12 22:07:21 -00 2026
Mar 12 23:45:30.122208 kernel: KASLR disabled due to lack of seed
Mar 12 23:45:30.122225 kernel: efi: EFI v2.7 by EDK II
Mar 12 23:45:30.122241 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Mar 12 23:45:30.122257 kernel: secureboot: Secure boot disabled
Mar 12 23:45:30.122274 kernel: ACPI: Early table checksum verification disabled
Mar 12 23:45:30.122290 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 12 23:45:30.122306 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 12 23:45:30.122321 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 12 23:45:30.122337 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 12 23:45:30.122357 kernel: ACPI: FACS 0x0000000078630000 000040
Mar 12 23:45:30.122373 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 12 23:45:30.122389 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 12 23:45:30.122408 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 12 23:45:30.122425 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 12 23:45:30.122447 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 12 23:45:30.122463 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 12 23:45:30.122480 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 12 23:45:30.122497 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 12 23:45:30.122513 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 12 23:45:30.122530 kernel: printk: legacy bootconsole [uart0] enabled
Mar 12 23:45:30.122547 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 12 23:45:30.122563 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 12 23:45:30.122580 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Mar 12 23:45:30.122596 kernel: Zone ranges:
Mar 12 23:45:30.122612 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 12 23:45:30.122632 kernel: DMA32 empty
Mar 12 23:45:30.122648 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 12 23:45:30.122664 kernel: Device empty
Mar 12 23:45:30.122680 kernel: Movable zone start for each node
Mar 12 23:45:30.122697 kernel: Early memory node ranges
Mar 12 23:45:30.122713 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 12 23:45:30.122730 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 12 23:45:30.122746 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 12 23:45:30.122762 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 12 23:45:30.122778 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 12 23:45:30.122795 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 12 23:45:30.122811 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 12 23:45:30.122832 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 12 23:45:30.122854 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 12 23:45:30.122871 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 12 23:45:30.122888 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Mar 12 23:45:30.122905 kernel: psci: probing for conduit method from ACPI.
Mar 12 23:45:30.122926 kernel: psci: PSCIv1.0 detected in firmware.
Mar 12 23:45:30.122943 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 12 23:45:30.122959 kernel: psci: Trusted OS migration not required
Mar 12 23:45:30.122976 kernel: psci: SMC Calling Convention v1.1
Mar 12 23:45:30.122993 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 12 23:45:30.123011 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 12 23:45:30.123028 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 12 23:45:30.123046 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 12 23:45:30.123064 kernel: Detected PIPT I-cache on CPU0
Mar 12 23:45:30.125248 kernel: CPU features: detected: GIC system register CPU interface
Mar 12 23:45:30.125273 kernel: CPU features: detected: Spectre-v2
Mar 12 23:45:30.125302 kernel: CPU features: detected: Spectre-v3a
Mar 12 23:45:30.125320 kernel: CPU features: detected: Spectre-BHB
Mar 12 23:45:30.125337 kernel: CPU features: detected: ARM erratum 1742098
Mar 12 23:45:30.125354 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 12 23:45:30.125371 kernel: alternatives: applying boot alternatives
Mar 12 23:45:30.125391 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9bf054737b516803a47d5bd373cc1c618bc257c93cef3d2e2bc09897e693383d
Mar 12 23:45:30.125409 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 23:45:30.125426 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 23:45:30.125443 kernel: Fallback order for Node 0: 0
Mar 12 23:45:30.125461 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Mar 12 23:45:30.125477 kernel: Policy zone: Normal
Mar 12 23:45:30.125498 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 23:45:30.125515 kernel: software IO TLB: area num 2.
Mar 12 23:45:30.125532 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Mar 12 23:45:30.125550 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 12 23:45:30.125566 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 23:45:30.125585 kernel: rcu: RCU event tracing is enabled.
Mar 12 23:45:30.125603 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 12 23:45:30.125621 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 23:45:30.125639 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 23:45:30.125657 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 23:45:30.125675 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 12 23:45:30.125696 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 23:45:30.125714 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 23:45:30.125731 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 12 23:45:30.125749 kernel: GICv3: 96 SPIs implemented
Mar 12 23:45:30.125766 kernel: GICv3: 0 Extended SPIs implemented
Mar 12 23:45:30.125783 kernel: Root IRQ handler: gic_handle_irq
Mar 12 23:45:30.125801 kernel: GICv3: GICv3 features: 16 PPIs
Mar 12 23:45:30.125819 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Mar 12 23:45:30.125837 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 12 23:45:30.125854 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 12 23:45:30.125872 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Mar 12 23:45:30.125890 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Mar 12 23:45:30.125911 kernel: GICv3: using LPI property table @0x0000000400110000
Mar 12 23:45:30.125927 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 12 23:45:30.125944 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Mar 12 23:45:30.125961 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 23:45:30.125978 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 12 23:45:30.125995 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 12 23:45:30.126012 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 12 23:45:30.126029 kernel: Console: colour dummy device 80x25
Mar 12 23:45:30.126046 kernel: printk: legacy console [tty1] enabled
Mar 12 23:45:30.126064 kernel: ACPI: Core revision 20240827
Mar 12 23:45:30.126106 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 12 23:45:30.126136 kernel: pid_max: default: 32768 minimum: 301
Mar 12 23:45:30.126155 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 12 23:45:30.126173 kernel: landlock: Up and running.
Mar 12 23:45:30.126191 kernel: SELinux: Initializing.
Mar 12 23:45:30.126209 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 23:45:30.126227 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 23:45:30.126244 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 23:45:30.126261 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 23:45:30.126283 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 12 23:45:30.126300 kernel: Remapping and enabling EFI services.
Mar 12 23:45:30.126317 kernel: smp: Bringing up secondary CPUs ...
Mar 12 23:45:30.126334 kernel: Detected PIPT I-cache on CPU1
Mar 12 23:45:30.126351 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 12 23:45:30.126368 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Mar 12 23:45:30.126385 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 12 23:45:30.126403 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 23:45:30.126421 kernel: SMP: Total of 2 processors activated.
Mar 12 23:45:30.126443 kernel: CPU: All CPU(s) started at EL1
Mar 12 23:45:30.126472 kernel: CPU features: detected: 32-bit EL0 Support
Mar 12 23:45:30.126492 kernel: CPU features: detected: 32-bit EL1 Support
Mar 12 23:45:30.126514 kernel: CPU features: detected: CRC32 instructions
Mar 12 23:45:30.126532 kernel: alternatives: applying system-wide alternatives
Mar 12 23:45:30.126551 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Mar 12 23:45:30.126570 kernel: devtmpfs: initialized
Mar 12 23:45:30.126590 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 23:45:30.126613 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 12 23:45:30.126632 kernel: 16880 pages in range for non-PLT usage
Mar 12 23:45:30.126650 kernel: 508400 pages in range for PLT usage
Mar 12 23:45:30.126668 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 23:45:30.126686 kernel: SMBIOS 3.0.0 present.
Mar 12 23:45:30.126704 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 12 23:45:30.126722 kernel: DMI: Memory slots populated: 0/0
Mar 12 23:45:30.126740 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 23:45:30.126758 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 12 23:45:30.126779 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 12 23:45:30.126798 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 12 23:45:30.126816 kernel: audit: initializing netlink subsys (disabled)
Mar 12 23:45:30.126834 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Mar 12 23:45:30.126851 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 23:45:30.126869 kernel: cpuidle: using governor menu
Mar 12 23:45:30.126887 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 12 23:45:30.126906 kernel: ASID allocator initialised with 65536 entries
Mar 12 23:45:30.126924 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 23:45:30.126947 kernel: Serial: AMBA PL011 UART driver
Mar 12 23:45:30.126966 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 23:45:30.126985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 23:45:30.127004 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 12 23:45:30.127023 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 12 23:45:30.127041 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 23:45:30.127059 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 23:45:30.129431 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 12 23:45:30.129462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 12 23:45:30.129489 kernel: ACPI: Added _OSI(Module Device)
Mar 12 23:45:30.129509 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 23:45:30.129528 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 23:45:30.129547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 23:45:30.129568 kernel: ACPI: Interpreter enabled
Mar 12 23:45:30.129586 kernel: ACPI: Using GIC for interrupt routing
Mar 12 23:45:30.129605 kernel: ACPI: MCFG table detected, 1 entries
Mar 12 23:45:30.129624 kernel: ACPI: CPU0 has been hot-added
Mar 12 23:45:30.129643 kernel: ACPI: CPU1 has been hot-added
Mar 12 23:45:30.129666 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 12 23:45:30.130149 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 23:45:30.130357 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 12 23:45:30.130553 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 12 23:45:30.130746 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 12 23:45:30.130941 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 12 23:45:30.130966 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 12 23:45:30.130992 kernel: acpiphp: Slot [1] registered
Mar 12 23:45:30.131012 kernel: acpiphp: Slot [2] registered
Mar 12 23:45:30.131030 kernel: acpiphp: Slot [3] registered
Mar 12 23:45:30.131049 kernel: acpiphp: Slot [4] registered
Mar 12 23:45:30.131068 kernel: acpiphp: Slot [5] registered
Mar 12 23:45:30.131779 kernel: acpiphp: Slot [6] registered
Mar 12 23:45:30.131801 kernel: acpiphp: Slot [7] registered
Mar 12 23:45:30.131820 kernel: acpiphp: Slot [8] registered
Mar 12 23:45:30.131839 kernel: acpiphp: Slot [9] registered
Mar 12 23:45:30.131857 kernel: acpiphp: Slot [10] registered
Mar 12 23:45:30.131885 kernel: acpiphp: Slot [11] registered
Mar 12 23:45:30.131904 kernel: acpiphp: Slot [12] registered
Mar 12 23:45:30.131922 kernel: acpiphp: Slot [13] registered
Mar 12 23:45:30.131941 kernel: acpiphp: Slot [14] registered
Mar 12 23:45:30.131960 kernel: acpiphp: Slot [15] registered
Mar 12 23:45:30.133580 kernel: acpiphp: Slot [16] registered
Mar 12 23:45:30.133621 kernel: acpiphp: Slot [17] registered
Mar 12 23:45:30.133640 kernel: acpiphp: Slot [18] registered
Mar 12 23:45:30.133658 kernel: acpiphp: Slot [19] registered
Mar 12 23:45:30.133687 kernel: acpiphp: Slot [20] registered
Mar 12 23:45:30.133705 kernel: acpiphp: Slot [21] registered
Mar 12 23:45:30.133723 kernel: acpiphp: Slot [22] registered
Mar 12 23:45:30.133741 kernel: acpiphp: Slot [23] registered
Mar 12 23:45:30.133759 kernel: acpiphp: Slot [24] registered
Mar 12 23:45:30.133778 kernel: acpiphp: Slot [25] registered
Mar 12 23:45:30.133796 kernel: acpiphp: Slot [26] registered
Mar 12 23:45:30.133816 kernel: acpiphp: Slot [27] registered
Mar 12 23:45:30.133834 kernel: acpiphp: Slot [28] registered
Mar 12 23:45:30.133853 kernel: acpiphp: Slot [29] registered
Mar 12 23:45:30.133876 kernel: acpiphp: Slot [30] registered
Mar 12 23:45:30.133895 kernel: acpiphp: Slot [31] registered
Mar 12 23:45:30.133913 kernel: PCI host bridge to bus 0000:00
Mar 12 23:45:30.134202 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 12 23:45:30.134388 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 12 23:45:30.134566 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 12 23:45:30.134740 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 12 23:45:30.134985 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Mar 12 23:45:30.135823 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Mar 12 23:45:30.136031 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Mar 12 23:45:30.136267 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Mar 12 23:45:30.136465 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Mar 12 23:45:30.136656 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 12 23:45:30.136912 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Mar 12 23:45:30.137156 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Mar 12 23:45:30.138344 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Mar 12 23:45:30.138554 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Mar 12 23:45:30.138749 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 12 23:45:30.138927 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 12 23:45:30.139139 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 12 23:45:30.139333 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 12 23:45:30.139361 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 12 23:45:30.139381 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 12 23:45:30.139400 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 12 23:45:30.139420 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 12 23:45:30.139438 kernel: iommu: Default domain type: Translated
Mar 12 23:45:30.139457 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 12 23:45:30.139476 kernel: efivars: Registered efivars operations
Mar 12 23:45:30.139494 kernel: vgaarb: loaded
Mar 12 23:45:30.139519 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 12 23:45:30.139538 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 23:45:30.139556 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 23:45:30.139575 kernel: pnp: PnP ACPI init
Mar 12 23:45:30.139793 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 12 23:45:30.139824 kernel: pnp: PnP ACPI: found 1 devices
Mar 12 23:45:30.139843 kernel: NET: Registered PF_INET protocol family
Mar 12 23:45:30.139862 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 23:45:30.139887 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 23:45:30.139909 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 23:45:30.139929 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 23:45:30.139948 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 23:45:30.139968 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 23:45:30.139987 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 23:45:30.140006 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 23:45:30.140025 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 23:45:30.140044 kernel: PCI: CLS 0 bytes, default 64
Mar 12 23:45:30.140067 kernel: kvm [1]: HYP mode not available
Mar 12 23:45:30.145164 kernel: Initialise system trusted keyrings
Mar 12 23:45:30.145186 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 23:45:30.145205 kernel: Key type asymmetric registered
Mar 12 23:45:30.145224 kernel: Asymmetric key parser 'x509' registered
Mar 12 23:45:30.145242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 12 23:45:30.145261 kernel: io scheduler mq-deadline registered
Mar 12 23:45:30.145279 kernel: io scheduler kyber registered
Mar 12 23:45:30.145298 kernel: io scheduler bfq registered
Mar 12 23:45:30.145566 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 12 23:45:30.145595 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 12 23:45:30.145614 kernel: ACPI: button: Power Button [PWRB]
Mar 12 23:45:30.145633 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 12 23:45:30.145651 kernel: ACPI: button: Sleep Button [SLPB]
Mar 12 23:45:30.145669 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 23:45:30.145688 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 12 23:45:30.145884 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 12 23:45:30.145915 kernel: printk: legacy console [ttyS0] disabled
Mar 12 23:45:30.145934 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 12 23:45:30.145953 kernel: printk: legacy console [ttyS0] enabled
Mar 12 23:45:30.145971 kernel: printk: legacy bootconsole [uart0] disabled
Mar 12 23:45:30.145989 kernel: thunder_xcv, ver 1.0
Mar 12 23:45:30.146007 kernel: thunder_bgx, ver 1.0
Mar 12 23:45:30.146024 kernel: nicpf, ver 1.0
Mar 12 23:45:30.146042 kernel: nicvf, ver 1.0
Mar 12 23:45:30.146271 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 12 23:45:30.146457 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-12T23:45:29 UTC (1773359129)
Mar 12 23:45:30.146482 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 12 23:45:30.146500 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Mar 12 23:45:30.146519 kernel: NET: Registered PF_INET6 protocol family
Mar 12 23:45:30.146537 kernel: watchdog: NMI not fully supported
Mar 12 23:45:30.146554 kernel: watchdog: Hard watchdog permanently disabled
Mar 12 23:45:30.146572 kernel: Segment Routing with IPv6
Mar 12 23:45:30.146590 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 23:45:30.146607 kernel: NET: Registered PF_PACKET protocol family
Mar 12 23:45:30.146630 kernel: Key type dns_resolver registered
Mar 12 23:45:30.146648 kernel: registered taskstats version 1
Mar 12 23:45:30.146666 kernel: Loading compiled-in X.509 certificates
Mar 12 23:45:30.146684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 653709f5ad64856a37b70c07139630123477ee1c'
Mar 12 23:45:30.146702 kernel: Demotion targets for Node 0: null
Mar 12 23:45:30.146720 kernel: Key type .fscrypt registered
Mar 12 23:45:30.146737 kernel: Key type fscrypt-provisioning registered
Mar 12 23:45:30.146755 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 23:45:30.146773 kernel: ima: Allocated hash algorithm: sha1
Mar 12 23:45:30.146795 kernel: ima: No architecture policies found
Mar 12 23:45:30.146813 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 12 23:45:30.146831 kernel: clk: Disabling unused clocks
Mar 12 23:45:30.146849 kernel: PM: genpd: Disabling unused power domains
Mar 12 23:45:30.146867 kernel: Warning: unable to open an initial console.
Mar 12 23:45:30.146885 kernel: Freeing unused kernel memory: 39552K
Mar 12 23:45:30.146903 kernel: Run /init as init process
Mar 12 23:45:30.146920 kernel: with arguments:
Mar 12 23:45:30.146938 kernel: /init
Mar 12 23:45:30.146959 kernel: with environment:
Mar 12 23:45:30.146976 kernel: HOME=/
Mar 12 23:45:30.146994 kernel: TERM=linux
Mar 12 23:45:30.147013 systemd[1]: Successfully made /usr/ read-only.
Mar 12 23:45:30.147037 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 23:45:30.147058 systemd[1]: Detected virtualization amazon.
Mar 12 23:45:30.149752 systemd[1]: Detected architecture arm64.
Mar 12 23:45:30.149784 systemd[1]: Running in initrd.
Mar 12 23:45:30.149804 systemd[1]: No hostname configured, using default hostname.
Mar 12 23:45:30.149824 systemd[1]: Hostname set to .
Mar 12 23:45:30.149843 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 23:45:30.149862 systemd[1]: Queued start job for default target initrd.target.
Mar 12 23:45:30.149881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 23:45:30.149901 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 23:45:30.149921 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 23:45:30.149944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 23:45:30.149964 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 23:45:30.149984 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 23:45:30.150006 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 23:45:30.150025 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 23:45:30.150045 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 23:45:30.150064 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 23:45:30.150108 systemd[1]: Reached target paths.target - Path Units.
Mar 12 23:45:30.150129 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 23:45:30.150148 systemd[1]: Reached target swap.target - Swaps.
Mar 12 23:45:30.150167 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 23:45:30.150186 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 23:45:30.150205 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 23:45:30.150224 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 23:45:30.150243 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 12 23:45:30.150263 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 23:45:30.150286 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 23:45:30.150305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 23:45:30.150324 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 23:45:30.150343 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 23:45:30.150363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 23:45:30.150382 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 23:45:30.150402 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 12 23:45:30.150421 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 23:45:30.150444 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 23:45:30.150463 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 23:45:30.150482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 23:45:30.150501 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 23:45:30.150522 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 23:45:30.150546 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 23:45:30.150565 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 23:45:30.150624 systemd-journald[258]: Collecting audit messages is disabled.
Mar 12 23:45:30.150666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 23:45:30.150690 systemd-journald[258]: Journal started
Mar 12 23:45:30.150726 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2938bb08ed1c81a1676f0d61a91e36) is 8M, max 75.3M, 67.3M free.
Mar 12 23:45:30.114595 systemd-modules-load[260]: Inserted module 'overlay'
Mar 12 23:45:30.158057 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 23:45:30.158841 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 23:45:30.171184 kernel: Bridge firewalling registered
Mar 12 23:45:30.168298 systemd-modules-load[260]: Inserted module 'br_netfilter'
Mar 12 23:45:30.174230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 23:45:30.181133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 23:45:30.188994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 23:45:30.198378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 23:45:30.215108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 23:45:30.228429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 23:45:30.247624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:45:30.259375 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 23:45:30.276650 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 12 23:45:30.286700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 23:45:30.292923 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 23:45:30.300164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 23:45:30.316790 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 23:45:30.347796 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9bf054737b516803a47d5bd373cc1c618bc257c93cef3d2e2bc09897e693383d
Mar 12 23:45:30.417763 systemd-resolved[301]: Positive Trust Anchors:
Mar 12 23:45:30.417800 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 23:45:30.417862 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 23:45:30.517111 kernel: SCSI subsystem initialized
Mar 12 23:45:30.525109 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 23:45:30.538111 kernel: iscsi: registered transport (tcp)
Mar 12 23:45:30.559106 kernel: iscsi: registered transport (qla4xxx)
Mar 12 23:45:30.560104 kernel: QLogic iSCSI HBA Driver
Mar 12 23:45:30.595249 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 23:45:30.637134 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 23:45:30.645995 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 23:45:30.692115 kernel: random: crng init done Mar 12 23:45:30.692418 systemd-resolved[301]: Defaulting to hostname 'linux'. Mar 12 23:45:30.696269 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 23:45:30.701696 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 23:45:30.740131 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 23:45:30.741877 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 23:45:30.837118 kernel: raid6: neonx8 gen() 6566 MB/s Mar 12 23:45:30.854107 kernel: raid6: neonx4 gen() 6569 MB/s Mar 12 23:45:30.871118 kernel: raid6: neonx2 gen() 5430 MB/s Mar 12 23:45:30.888108 kernel: raid6: neonx1 gen() 3930 MB/s Mar 12 23:45:30.905108 kernel: raid6: int64x8 gen() 3648 MB/s Mar 12 23:45:30.922107 kernel: raid6: int64x4 gen() 3688 MB/s Mar 12 23:45:30.939106 kernel: raid6: int64x2 gen() 3599 MB/s Mar 12 23:45:30.957177 kernel: raid6: int64x1 gen() 2762 MB/s Mar 12 23:45:30.957217 kernel: raid6: using algorithm neonx4 gen() 6569 MB/s Mar 12 23:45:30.976110 kernel: raid6: .... xor() 4866 MB/s, rmw enabled Mar 12 23:45:30.976150 kernel: raid6: using neon recovery algorithm Mar 12 23:45:30.984810 kernel: xor: measuring software checksum speed Mar 12 23:45:30.984866 kernel: 8regs : 12939 MB/sec Mar 12 23:45:30.986067 kernel: 32regs : 13041 MB/sec Mar 12 23:45:30.988435 kernel: arm64_neon : 8698 MB/sec Mar 12 23:45:30.988468 kernel: xor: using function: 32regs (13041 MB/sec) Mar 12 23:45:31.081124 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 23:45:31.094149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Mar 12 23:45:31.101943 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 23:45:31.152439 systemd-udevd[510]: Using default interface naming scheme 'v255'. Mar 12 23:45:31.163039 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 23:45:31.177836 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 23:45:31.217251 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation Mar 12 23:45:31.263722 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 23:45:31.270792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 23:45:31.399819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 23:45:31.411850 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 23:45:31.554555 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 12 23:45:31.554632 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 12 23:45:31.576989 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 12 23:45:31.577365 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 12 23:45:31.587749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 23:45:31.592919 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:45:31.599985 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 12 23:45:31.600044 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 12 23:45:31.600298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:45:31.607631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:45:31.615527 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 12 23:45:31.622757 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 12 23:45:31.623017 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:84:3a:13:73:d5 Mar 12 23:45:31.635930 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 23:45:31.635993 kernel: GPT:9289727 != 33554431 Mar 12 23:45:31.636018 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 23:45:31.637651 kernel: GPT:9289727 != 33554431 Mar 12 23:45:31.638838 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 12 23:45:31.640846 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 12 23:45:31.650371 (udev-worker)[569]: Network interface NamePolicy= disabled on kernel command line. Mar 12 23:45:31.670758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:45:31.701104 kernel: nvme nvme0: using unchecked data buffer Mar 12 23:45:31.839986 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 12 23:45:31.867558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 12 23:45:31.891163 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 23:45:31.920134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 12 23:45:31.960157 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 12 23:45:31.963029 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 12 23:45:31.972873 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 23:45:31.975633 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 23:45:31.983780 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Mar 12 23:45:31.990500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 23:45:31.997231 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 23:45:32.020264 disk-uuid[689]: Primary Header is updated. Mar 12 23:45:32.020264 disk-uuid[689]: Secondary Entries is updated. Mar 12 23:45:32.020264 disk-uuid[689]: Secondary Header is updated. Mar 12 23:45:32.035103 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 12 23:45:32.043585 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 23:45:33.059113 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 12 23:45:33.062762 disk-uuid[691]: The operation has completed successfully. Mar 12 23:45:33.240332 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 23:45:33.240881 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 23:45:33.324223 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 23:45:33.348415 sh[957]: Success Mar 12 23:45:33.379871 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 12 23:45:33.379958 kernel: device-mapper: uevent: version 1.0.3 Mar 12 23:45:33.382369 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 12 23:45:33.394112 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Mar 12 23:45:33.511832 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 23:45:33.519220 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 12 23:45:33.536762 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 12 23:45:33.564907 kernel: BTRFS: device fsid fcbb17b2-5053-44fc-82f0-b24e4919d6d8 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (992) Mar 12 23:45:33.564984 kernel: BTRFS info (device dm-0): first mount of filesystem fcbb17b2-5053-44fc-82f0-b24e4919d6d8 Mar 12 23:45:33.565012 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 12 23:45:33.688180 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 12 23:45:33.688254 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 12 23:45:33.688280 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 12 23:45:33.713648 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 23:45:33.715997 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 12 23:45:33.720709 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 23:45:33.721890 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 23:45:33.732581 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 12 23:45:33.796279 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:10) scanned by mount (1025) Mar 12 23:45:33.803440 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:33.803521 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 12 23:45:33.814131 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 12 23:45:33.814208 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 12 23:45:33.823162 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:33.826303 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 12 23:45:33.832291 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 23:45:33.932390 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 23:45:33.942235 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 23:45:34.025983 systemd-networkd[1161]: lo: Link UP Mar 12 23:45:34.026005 systemd-networkd[1161]: lo: Gained carrier Mar 12 23:45:34.028542 systemd-networkd[1161]: Enumeration completed Mar 12 23:45:34.029428 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 23:45:34.029435 systemd-networkd[1161]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 23:45:34.029710 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 23:45:34.038552 systemd[1]: Reached target network.target - Network. Mar 12 23:45:34.040024 systemd-networkd[1161]: eth0: Link UP Mar 12 23:45:34.040031 systemd-networkd[1161]: eth0: Gained carrier Mar 12 23:45:34.040052 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 23:45:34.072158 systemd-networkd[1161]: eth0: DHCPv4 address 172.31.21.65/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 12 23:45:34.398526 ignition[1082]: Ignition 2.22.0 Mar 12 23:45:34.398554 ignition[1082]: Stage: fetch-offline Mar 12 23:45:34.402061 ignition[1082]: no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:34.402108 ignition[1082]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:34.406581 ignition[1082]: Ignition finished successfully Mar 12 23:45:34.413249 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 23:45:34.423331 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 12 23:45:34.473905 ignition[1172]: Ignition 2.22.0 Mar 12 23:45:34.474431 ignition[1172]: Stage: fetch Mar 12 23:45:34.474944 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:34.474967 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:34.475123 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:34.492924 ignition[1172]: PUT result: OK Mar 12 23:45:34.496897 ignition[1172]: parsed url from cmdline: "" Mar 12 23:45:34.497036 ignition[1172]: no config URL provided Mar 12 23:45:34.497054 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 23:45:34.497652 ignition[1172]: no config at "/usr/lib/ignition/user.ign" Mar 12 23:45:34.497687 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:34.503489 ignition[1172]: PUT result: OK Mar 12 23:45:34.505918 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 12 23:45:34.511923 ignition[1172]: GET result: OK Mar 12 23:45:34.513468 ignition[1172]: parsing config with SHA512: eb8cee35a91a100ba6be096d6e16874f3c7fc4a150552faeea8c770855872f2e1025b7f0c165db589af8f04f6524b83faae7c44f2702c7d40ea50145d95ce619 Mar 12 23:45:34.527674 unknown[1172]: fetched base config from "system" Mar 12 23:45:34.527933 unknown[1172]: fetched base config from "system" Mar 12 23:45:34.528566 ignition[1172]: fetch: fetch complete Mar 12 23:45:34.527947 unknown[1172]: fetched user config from "aws" Mar 12 23:45:34.528581 ignition[1172]: fetch: fetch passed Mar 12 23:45:34.528768 ignition[1172]: Ignition finished successfully Mar 12 23:45:34.541138 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 12 23:45:34.547809 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 12 23:45:34.600345 ignition[1179]: Ignition 2.22.0 Mar 12 23:45:34.600880 ignition[1179]: Stage: kargs Mar 12 23:45:34.601478 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:34.601502 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:34.601659 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:34.616949 ignition[1179]: PUT result: OK Mar 12 23:45:34.621990 ignition[1179]: kargs: kargs passed Mar 12 23:45:34.622137 ignition[1179]: Ignition finished successfully Mar 12 23:45:34.627684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 12 23:45:34.634385 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 12 23:45:34.684360 ignition[1185]: Ignition 2.22.0 Mar 12 23:45:34.684391 ignition[1185]: Stage: disks Mar 12 23:45:34.685050 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:34.685111 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:34.685236 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:34.687844 ignition[1185]: PUT result: OK Mar 12 23:45:34.702475 ignition[1185]: disks: disks passed Mar 12 23:45:34.702800 ignition[1185]: Ignition finished successfully Mar 12 23:45:34.712143 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 23:45:34.717220 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 23:45:34.720346 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 23:45:34.728402 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 23:45:34.731407 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 23:45:34.737937 systemd[1]: Reached target basic.target - Basic System. Mar 12 23:45:34.743635 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 12 23:45:34.787298 systemd-fsck[1193]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 12 23:45:34.794399 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 23:45:34.803685 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 12 23:45:34.953118 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4b09db19-3beb-48c2-8dcb-3eec5602206c r/w with ordered data mode. Quota mode: none. Mar 12 23:45:34.955037 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 23:45:34.959276 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 12 23:45:34.966578 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 23:45:34.977684 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 12 23:45:34.980461 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 12 23:45:34.980537 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 23:45:34.980582 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 23:45:35.004538 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 12 23:45:35.012807 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 12 23:45:35.026123 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:10) scanned by mount (1212) Mar 12 23:45:35.030643 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:35.030720 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 12 23:45:35.038406 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 12 23:45:35.038456 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 12 23:45:35.041602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 23:45:35.390203 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory Mar 12 23:45:35.400336 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory Mar 12 23:45:35.409991 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory Mar 12 23:45:35.419063 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory Mar 12 23:45:35.545429 systemd-networkd[1161]: eth0: Gained IPv6LL Mar 12 23:45:35.792831 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 23:45:35.798648 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 23:45:35.806787 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 12 23:45:35.834667 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 12 23:45:35.840104 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:35.871166 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 12 23:45:35.891113 ignition[1325]: INFO : Ignition 2.22.0 Mar 12 23:45:35.891113 ignition[1325]: INFO : Stage: mount Mar 12 23:45:35.895041 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:35.895041 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:35.895041 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:35.903709 ignition[1325]: INFO : PUT result: OK Mar 12 23:45:35.908448 ignition[1325]: INFO : mount: mount passed Mar 12 23:45:35.910381 ignition[1325]: INFO : Ignition finished successfully Mar 12 23:45:35.914829 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 23:45:35.920836 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 23:45:35.957988 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 23:45:35.995930 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:10) scanned by mount (1337) Mar 12 23:45:35.995994 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 12 23:45:35.996032 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 12 23:45:36.004889 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 12 23:45:36.004960 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 12 23:45:36.008184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 12 23:45:36.056506 ignition[1354]: INFO : Ignition 2.22.0 Mar 12 23:45:36.056506 ignition[1354]: INFO : Stage: files Mar 12 23:45:36.061029 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:36.061029 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:36.061029 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:36.061029 ignition[1354]: INFO : PUT result: OK Mar 12 23:45:36.072230 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping Mar 12 23:45:36.084278 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 23:45:36.084278 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 23:45:36.091373 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 23:45:36.091373 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 23:45:36.091373 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 23:45:36.089966 unknown[1354]: wrote ssh authorized keys file for user: core Mar 12 23:45:36.104567 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 12 23:45:36.110159 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 12 23:45:36.202155 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 23:45:36.366157 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 12 23:45:36.366157 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Mar 12 23:45:36.375485 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 23:45:36.375485 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 23:45:36.375485 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 23:45:36.375485 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 23:45:36.375485 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 23:45:36.375485 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 23:45:36.375485 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 23:45:36.402561 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 23:45:36.402561 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 23:45:36.402561 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 23:45:36.402561 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 23:45:36.402561 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 23:45:36.402561 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 12 23:45:36.726034 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 23:45:37.192559 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 23:45:37.192559 ignition[1354]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 23:45:37.201450 ignition[1354]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 23:45:37.201450 ignition[1354]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 23:45:37.201450 ignition[1354]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 23:45:37.201450 ignition[1354]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 12 23:45:37.201450 ignition[1354]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 23:45:37.201450 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 23:45:37.201450 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 23:45:37.201450 ignition[1354]: INFO : files: files passed Mar 12 23:45:37.201450 ignition[1354]: INFO : Ignition finished successfully Mar 12 23:45:37.232383 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 23:45:37.237839 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Mar 12 23:45:37.245029 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 12 23:45:37.264943 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 23:45:37.266922 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 23:45:37.282838 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 23:45:37.282838 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 23:45:37.293270 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 23:45:37.300165 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 23:45:37.303538 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 23:45:37.312162 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 23:45:37.396179 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 23:45:37.396380 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 23:45:37.400514 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 23:45:37.402966 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 23:45:37.407529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 23:45:37.409341 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 23:45:37.470802 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 23:45:37.479199 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 23:45:37.535095 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Mar 12 23:45:37.538826 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 23:45:37.547175 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 23:45:37.549670 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 23:45:37.549921 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 23:45:37.560012 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 23:45:37.562854 systemd[1]: Stopped target basic.target - Basic System. Mar 12 23:45:37.569767 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 23:45:37.572878 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 23:45:37.580421 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 23:45:37.586677 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 12 23:45:37.592057 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 23:45:37.596665 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 23:45:37.602436 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 23:45:37.605612 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 23:45:37.612524 systemd[1]: Stopped target swap.target - Swaps. Mar 12 23:45:37.614795 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 23:45:37.615128 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 23:45:37.624065 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 23:45:37.627154 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 23:45:37.635400 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 23:45:37.637651 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 12 23:45:37.641171 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 23:45:37.641399 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 23:45:37.649994 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 23:45:37.650283 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 23:45:37.658039 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 23:45:37.658377 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 23:45:37.668253 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 23:45:37.677308 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 23:45:37.681557 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 23:45:37.686449 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 23:45:37.693145 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 23:45:37.693601 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 23:45:37.714681 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 23:45:37.716525 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 23:45:37.741910 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 23:45:37.751816 ignition[1407]: INFO : Ignition 2.22.0 Mar 12 23:45:37.751816 ignition[1407]: INFO : Stage: umount Mar 12 23:45:37.756225 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 23:45:37.756225 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 12 23:45:37.756225 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 12 23:45:37.763993 ignition[1407]: INFO : PUT result: OK Mar 12 23:45:37.766638 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Mar 12 23:45:37.769314 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 23:45:37.775939 ignition[1407]: INFO : umount: umount passed Mar 12 23:45:37.777991 ignition[1407]: INFO : Ignition finished successfully Mar 12 23:45:37.780208 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 23:45:37.782429 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 23:45:37.789465 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 23:45:37.789699 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 23:45:37.796037 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 23:45:37.796174 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 23:45:37.800508 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 12 23:45:37.800590 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 12 23:45:37.809905 systemd[1]: Stopped target network.target - Network. Mar 12 23:45:37.811903 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 23:45:37.811997 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 23:45:37.819201 systemd[1]: Stopped target paths.target - Path Units. Mar 12 23:45:37.821405 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 23:45:37.828301 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 23:45:37.831959 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 23:45:37.838966 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 23:45:37.842995 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 23:45:37.844030 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 23:45:37.847525 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 23:45:37.847603 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Mar 12 23:45:37.849158 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 23:45:37.849252 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 23:45:37.853471 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 23:45:37.853552 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 23:45:37.857501 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 23:45:37.857585 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 23:45:37.864669 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 23:45:37.866396 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 23:45:37.889812 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 23:45:37.890025 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 23:45:37.901729 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 12 23:45:37.902237 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 23:45:37.902640 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 23:45:37.913971 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 12 23:45:37.915422 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 12 23:45:37.922539 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 23:45:37.922763 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 23:45:37.931482 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 23:45:37.936165 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 23:45:37.936282 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 23:45:37.939798 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 23:45:37.939901 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:45:37.957202 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 23:45:37.957424 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 23:45:37.964448 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 23:45:37.964540 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 23:45:37.969748 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 23:45:37.973982 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 12 23:45:37.976227 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 12 23:45:38.004662 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 23:45:38.008728 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 23:45:38.013662 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 23:45:38.013807 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 23:45:38.021022 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 23:45:38.021128 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 23:45:38.025148 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 23:45:38.025247 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 23:45:38.028580 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 23:45:38.028674 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 23:45:38.029600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 23:45:38.029684 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 23:45:38.034894 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 23:45:38.039866 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 12 23:45:38.039980 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 23:45:38.043424 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 23:45:38.043526 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 23:45:38.077164 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 12 23:45:38.077271 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 23:45:38.089309 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 23:45:38.089430 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 23:45:38.094276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 23:45:38.094386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 23:45:38.106538 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 12 23:45:38.106642 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 12 23:45:38.106721 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 12 23:45:38.106805 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 12 23:45:38.107884 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 23:45:38.109266 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 23:45:38.115490 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 23:45:38.115651 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 23:45:38.120110 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 23:45:38.144457 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 23:45:38.198361 systemd[1]: Switching root.
Mar 12 23:45:38.251781 systemd-journald[258]: Journal stopped
Mar 12 23:45:40.735796 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Mar 12 23:45:40.735916 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 23:45:40.735958 kernel: SELinux: policy capability open_perms=1
Mar 12 23:45:40.735988 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 23:45:40.736018 kernel: SELinux: policy capability always_check_network=0
Mar 12 23:45:40.736054 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 23:45:40.736105 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 23:45:40.736146 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 23:45:40.736175 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 23:45:40.736203 kernel: SELinux: policy capability userspace_initial_context=0
Mar 12 23:45:40.736231 kernel: audit: type=1403 audit(1773359138.723:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 23:45:40.736264 systemd[1]: Successfully loaded SELinux policy in 84.459ms.
Mar 12 23:45:40.736311 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.925ms.
Mar 12 23:45:40.736344 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 23:45:40.736380 systemd[1]: Detected virtualization amazon.
Mar 12 23:45:40.736410 systemd[1]: Detected architecture arm64.
Mar 12 23:45:40.736439 systemd[1]: Detected first boot.
Mar 12 23:45:40.736470 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 23:45:40.736501 zram_generator::config[1450]: No configuration found.
Mar 12 23:45:40.736535 kernel: NET: Registered PF_VSOCK protocol family
Mar 12 23:45:40.736565 systemd[1]: Populated /etc with preset unit settings.
Mar 12 23:45:40.736596 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 12 23:45:40.736629 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 23:45:40.736657 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 23:45:40.736684 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 23:45:40.736734 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 23:45:40.736769 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 23:45:40.736796 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 23:45:40.736826 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 23:45:40.736853 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 23:45:40.736885 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 23:45:40.736918 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 23:45:40.736955 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 23:45:40.736985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 23:45:40.737013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 23:45:40.737041 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 23:45:40.738092 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 23:45:40.738139 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 23:45:40.738171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 23:45:40.738203 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 12 23:45:40.738239 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 23:45:40.738267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 23:45:40.738295 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 23:45:40.738323 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 23:45:40.738352 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 23:45:40.738380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 23:45:40.738414 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 23:45:40.738446 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 23:45:40.738474 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 23:45:40.738504 systemd[1]: Reached target swap.target - Swaps.
Mar 12 23:45:40.738532 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 23:45:40.738561 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 23:45:40.738591 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 12 23:45:40.738618 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 23:45:40.738647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 23:45:40.738675 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 23:45:40.738703 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 23:45:40.738736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 23:45:40.738763 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 23:45:40.738790 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 23:45:40.738818 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 23:45:40.738846 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 23:45:40.738873 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 23:45:40.738901 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 23:45:40.738931 systemd[1]: Reached target machines.target - Containers.
Mar 12 23:45:40.738962 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 23:45:40.738990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:45:40.739020 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 23:45:40.739047 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 23:45:40.739096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 23:45:40.739130 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 23:45:40.739158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 23:45:40.739186 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 23:45:40.739213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 23:45:40.739246 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 23:45:40.739274 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 12 23:45:40.739303 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 12 23:45:40.739331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 12 23:45:40.739358 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 12 23:45:40.739387 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:45:40.739417 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 23:45:40.739445 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 23:45:40.739477 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 23:45:40.739506 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 23:45:40.739561 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 12 23:45:40.739593 kernel: fuse: init (API version 7.41)
Mar 12 23:45:40.739623 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 23:45:40.739657 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 12 23:45:40.739694 systemd[1]: Stopped verity-setup.service.
Mar 12 23:45:40.739724 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 23:45:40.739752 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 23:45:40.739779 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 23:45:40.739809 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 23:45:40.739840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 23:45:40.739869 kernel: loop: module loaded
Mar 12 23:45:40.739897 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 23:45:40.739925 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 23:45:40.739952 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 23:45:40.739980 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 23:45:40.740008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 23:45:40.740035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 23:45:40.740067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 23:45:40.740130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 23:45:40.740159 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 23:45:40.740187 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 23:45:40.740214 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 23:45:40.740245 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 23:45:40.740272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 23:45:40.740303 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 23:45:40.740334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 23:45:40.740366 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 23:45:40.740394 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 23:45:40.740421 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 23:45:40.740451 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 23:45:40.740481 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 23:45:40.740509 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 12 23:45:40.740536 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 23:45:40.740564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:45:40.740642 systemd-journald[1529]: Collecting audit messages is disabled.
Mar 12 23:45:40.740712 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 23:45:40.740750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 23:45:40.740781 systemd-journald[1529]: Journal started
Mar 12 23:45:40.740830 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec2938bb08ed1c81a1676f0d61a91e36) is 8M, max 75.3M, 67.3M free.
Mar 12 23:45:40.056254 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 23:45:40.080958 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 12 23:45:40.081768 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 12 23:45:40.758101 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 23:45:40.758202 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 23:45:40.768291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 23:45:40.777739 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 23:45:40.789891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 23:45:40.796789 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 23:45:40.807422 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 12 23:45:40.812928 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 23:45:40.815972 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 23:45:40.844314 kernel: ACPI: bus type drm_connector registered
Mar 12 23:45:40.848873 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 23:45:40.856599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 23:45:40.870200 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 23:45:40.881714 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 23:45:40.888148 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 23:45:40.897771 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 12 23:45:40.923060 systemd-tmpfiles[1557]: ACLs are not supported, ignoring.
Mar 12 23:45:40.923134 systemd-tmpfiles[1557]: ACLs are not supported, ignoring.
Mar 12 23:45:40.945632 kernel: loop0: detected capacity change from 0 to 100632
Mar 12 23:45:40.972529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:45:40.980052 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 23:45:40.987429 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec2938bb08ed1c81a1676f0d61a91e36 is 122.868ms for 934 entries.
Mar 12 23:45:40.987429 systemd-journald[1529]: System Journal (/var/log/journal/ec2938bb08ed1c81a1676f0d61a91e36) is 8M, max 195.6M, 187.6M free.
Mar 12 23:45:41.129775 systemd-journald[1529]: Received client request to flush runtime journal.
Mar 12 23:45:41.130295 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 23:45:40.996372 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 23:45:41.009764 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 23:45:41.157171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 23:45:41.164160 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 23:45:41.165562 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 23:45:41.169241 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 12 23:45:41.185280 kernel: loop1: detected capacity change from 0 to 61264
Mar 12 23:45:41.234114 kernel: loop2: detected capacity change from 0 to 119840
Mar 12 23:45:41.251130 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 23:45:41.257490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 23:45:41.297348 systemd-tmpfiles[1608]: ACLs are not supported, ignoring.
Mar 12 23:45:41.297897 systemd-tmpfiles[1608]: ACLs are not supported, ignoring.
Mar 12 23:45:41.305542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 23:45:41.342126 kernel: loop3: detected capacity change from 0 to 209336
Mar 12 23:45:41.477111 kernel: loop4: detected capacity change from 0 to 100632
Mar 12 23:45:41.490204 kernel: loop5: detected capacity change from 0 to 61264
Mar 12 23:45:41.517227 kernel: loop6: detected capacity change from 0 to 119840
Mar 12 23:45:41.534130 kernel: loop7: detected capacity change from 0 to 209336
Mar 12 23:45:41.563640 (sd-merge)[1614]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 12 23:45:41.566141 (sd-merge)[1614]: Merged extensions into '/usr'.
Mar 12 23:45:41.576529 systemd[1]: Reload requested from client PID 1556 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 12 23:45:41.576740 systemd[1]: Reloading...
Mar 12 23:45:41.733167 zram_generator::config[1643]: No configuration found.
Mar 12 23:45:42.226956 systemd[1]: Reloading finished in 649 ms.
Mar 12 23:45:42.253168 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 12 23:45:42.257797 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 12 23:45:42.275357 systemd[1]: Starting ensure-sysext.service...
Mar 12 23:45:42.292175 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 23:45:42.300811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 23:45:42.334878 systemd[1]: Reload requested from client PID 1692 ('systemctl') (unit ensure-sysext.service)...
Mar 12 23:45:42.335111 systemd[1]: Reloading...
Mar 12 23:45:42.361306 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 12 23:45:42.362260 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 12 23:45:42.362916 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 12 23:45:42.364038 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 12 23:45:42.366476 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 12 23:45:42.367282 systemd-tmpfiles[1693]: ACLs are not supported, ignoring.
Mar 12 23:45:42.367417 systemd-tmpfiles[1693]: ACLs are not supported, ignoring.
Mar 12 23:45:42.377639 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 23:45:42.377833 systemd-tmpfiles[1693]: Skipping /boot
Mar 12 23:45:42.402116 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 23:45:42.403754 systemd-tmpfiles[1693]: Skipping /boot
Mar 12 23:45:42.432561 systemd-udevd[1694]: Using default interface naming scheme 'v255'.
Mar 12 23:45:42.552196 zram_generator::config[1723]: No configuration found.
Mar 12 23:45:42.688990 ldconfig[1548]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 23:45:42.817276 (udev-worker)[1730]: Network interface NamePolicy= disabled on kernel command line.
Mar 12 23:45:43.167534 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 12 23:45:43.167690 systemd[1]: Reloading finished in 831 ms.
Mar 12 23:45:43.177855 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 23:45:43.181601 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 23:45:43.202305 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 23:45:43.269814 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 12 23:45:43.277454 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 23:45:43.286995 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 23:45:43.293748 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 23:45:43.301381 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 23:45:43.306354 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 23:45:43.318679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:45:43.326674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 23:45:43.337646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 23:45:43.354622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 23:45:43.357174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:45:43.357424 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:45:43.364584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:45:43.364950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:45:43.366279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:45:43.380719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:45:43.390877 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 23:45:43.393495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:45:43.393562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:45:43.393665 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 23:45:43.403475 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 23:45:43.408001 systemd[1]: Finished ensure-sysext.service.
Mar 12 23:45:43.432905 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 23:45:43.443938 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 23:45:43.456352 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 23:45:43.513190 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 23:45:43.516268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 23:45:43.519648 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 23:45:43.542654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 23:45:43.545992 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 23:45:43.547306 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 23:45:43.551921 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 23:45:43.557959 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 23:45:43.567263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 23:45:43.567798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 23:45:43.571700 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 23:45:43.622050 augenrules[1934]: No rules
Mar 12 23:45:43.623564 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 23:45:43.627172 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 12 23:45:43.641201 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 23:45:43.645395 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 23:45:43.759404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 23:45:43.868253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 12 23:45:43.882551 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 12 23:45:43.900813 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 23:45:43.956477 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 23:45:44.011916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 23:45:44.055057 systemd-resolved[1868]: Positive Trust Anchors:
Mar 12 23:45:44.055107 systemd-resolved[1868]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 23:45:44.055171 systemd-resolved[1868]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 23:45:44.060133 systemd-networkd[1867]: lo: Link UP
Mar 12 23:45:44.060148 systemd-networkd[1867]: lo: Gained carrier
Mar 12 23:45:44.063428 systemd-networkd[1867]: Enumeration completed
Mar 12 23:45:44.064243 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 23:45:44.069140 systemd-networkd[1867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 23:45:44.069425 systemd-networkd[1867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 23:45:44.070393 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 12 23:45:44.075282 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 23:45:44.078774 systemd-resolved[1868]: Defaulting to hostname 'linux'.
Mar 12 23:45:44.079594 systemd-networkd[1867]: eth0: Link UP
Mar 12 23:45:44.079860 systemd-networkd[1867]: eth0: Gained carrier
Mar 12 23:45:44.079895 systemd-networkd[1867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 23:45:44.086566 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 23:45:44.087800 systemd[1]: Reached target network.target - Network. Mar 12 23:45:44.087870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 23:45:44.091289 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 23:45:44.091737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 23:45:44.091999 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 23:45:44.096760 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 23:45:44.097054 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 23:45:44.097616 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 23:45:44.097968 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 23:45:44.098012 systemd[1]: Reached target paths.target - Path Units. Mar 12 23:45:44.100273 systemd[1]: Reached target timers.target - Timer Units. Mar 12 23:45:44.109605 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 23:45:44.117650 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 23:45:44.125964 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 12 23:45:44.129346 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 12 23:45:44.132264 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 12 23:45:44.137177 systemd-networkd[1867]: eth0: DHCPv4 address 172.31.21.65/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 12 23:45:44.139540 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Mar 12 23:45:44.142621 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 12 23:45:44.146563 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 23:45:44.149504 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 23:45:44.151808 systemd[1]: Reached target basic.target - Basic System. Mar 12 23:45:44.154050 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 23:45:44.154158 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 23:45:44.158176 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 23:45:44.165963 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 12 23:45:44.173210 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 23:45:44.184314 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 23:45:44.193370 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 23:45:44.199590 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 23:45:44.202053 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 23:45:44.207229 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 23:45:44.220497 systemd[1]: Started ntpd.service - Network Time Service. Mar 12 23:45:44.226929 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 23:45:44.235420 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 12 23:45:44.247423 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 12 23:45:44.253257 jq[1982]: false Mar 12 23:45:44.253576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 23:45:44.269066 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 12 23:45:44.274030 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 23:45:44.283837 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 23:45:44.291524 systemd[1]: Starting update-engine.service - Update Engine... Mar 12 23:45:44.300532 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 23:45:44.311670 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 12 23:45:44.317873 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 23:45:44.321435 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 23:45:44.321944 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 23:45:44.336056 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 23:45:44.336520 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 23:45:44.375574 extend-filesystems[1983]: Found /dev/nvme0n1p6 Mar 12 23:45:44.405193 extend-filesystems[1983]: Found /dev/nvme0n1p9 Mar 12 23:45:44.422422 extend-filesystems[1983]: Checking size of /dev/nvme0n1p9 Mar 12 23:45:44.430904 jq[1994]: true Mar 12 23:45:44.450145 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 23:45:44.450602 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 12 23:45:44.490169 tar[1999]: linux-arm64/LICENSE Mar 12 23:45:44.490169 tar[1999]: linux-arm64/helm Mar 12 23:45:44.521411 dbus-daemon[1980]: [system] SELinux support is enabled Mar 12 23:45:44.521712 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 23:45:44.527623 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 23:45:44.537966 extend-filesystems[1983]: Resized partition /dev/nvme0n1p9 Mar 12 23:45:44.527689 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 23:45:44.530610 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 23:45:44.530640 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 12 23:45:44.543718 (ntainerd)[2024]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 23:45:44.552243 extend-filesystems[2032]: resize2fs 1.47.3 (8-Jul-2025) Mar 12 23:45:44.560479 dbus-daemon[1980]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1867 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 12 23:45:44.570175 coreos-metadata[1979]: Mar 12 23:45:44.568 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 12 23:45:44.570860 update_engine[1993]: I20260312 23:45:44.569818 1993 main.cc:92] Flatcar Update Engine starting Mar 12 23:45:44.574503 coreos-metadata[1979]: Mar 12 23:45:44.574 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 12 23:45:44.580606 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 12 23:45:44.581971 jq[2023]: true Mar 12 23:45:44.588303 coreos-metadata[1979]: Mar 12 23:45:44.588 INFO Fetch successful Mar 12 23:45:44.588303 coreos-metadata[1979]: Mar 12 23:45:44.588 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 12 23:45:44.588303 coreos-metadata[1979]: Mar 12 23:45:44.588 INFO Fetch successful Mar 12 23:45:44.588303 coreos-metadata[1979]: Mar 12 23:45:44.588 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 12 23:45:44.591578 coreos-metadata[1979]: Mar 12 23:45:44.591 INFO Fetch successful Mar 12 23:45:44.591578 coreos-metadata[1979]: Mar 12 23:45:44.591 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 12 23:45:44.592268 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 12 23:45:44.600127 ntpd[1985]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:41 UTC 2026 (1): Starting Mar 12 
23:45:44.603868 coreos-metadata[1979]: Mar 12 23:45:44.601 INFO Fetch successful Mar 12 23:45:44.603868 coreos-metadata[1979]: Mar 12 23:45:44.601 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 12 23:45:44.603974 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:41 UTC 2026 (1): Starting Mar 12 23:45:44.603974 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 12 23:45:44.603974 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: ---------------------------------------------------- Mar 12 23:45:44.603974 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Mar 12 23:45:44.603974 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 12 23:45:44.603974 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: corporation. Support and training for ntp-4 are Mar 12 23:45:44.603974 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: available at https://www.nwtime.org/support Mar 12 23:45:44.603974 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: ---------------------------------------------------- Mar 12 23:45:44.600327 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 12 23:45:44.632473 update_engine[1993]: I20260312 23:45:44.608409 1993 update_check_scheduler.cc:74] Next update check in 2m20s Mar 12 23:45:44.632603 coreos-metadata[1979]: Mar 12 23:45:44.628 INFO Fetch failed with 404: resource not found Mar 12 23:45:44.632603 coreos-metadata[1979]: Mar 12 23:45:44.628 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 12 23:45:44.632603 coreos-metadata[1979]: Mar 12 23:45:44.628 INFO Fetch successful Mar 12 23:45:44.632603 coreos-metadata[1979]: Mar 12 23:45:44.628 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 12 23:45:44.632603 coreos-metadata[1979]: Mar 12 23:45:44.628 INFO Fetch successful Mar 12 23:45:44.632603 
coreos-metadata[1979]: Mar 12 23:45:44.632 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 12 23:45:44.611451 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 12 23:45:44.600351 ntpd[1985]: ---------------------------------------------------- Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: proto: precision = 0.096 usec (-23) Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: basedate set to 2026-02-28 Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: gps base set to 2026-03-01 (week 2408) Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: Listen normally on 3 eth0 172.31.21.65:123 Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: Listen normally on 4 lo [::1]:123 Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: bind(21) AF_INET6 [fe80::484:3aff:fe13:73d5%2]:123 flags 0x811 failed: Cannot assign requested address Mar 12 23:45:44.637647 ntpd[1985]: 12 Mar 23:45:44 ntpd[1985]: unable to create socket on eth0 (5) for [fe80::484:3aff:fe13:73d5%2]:123 Mar 12 23:45:44.614116 systemd[1]: Started update-engine.service - Update Engine. Mar 12 23:45:44.600368 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Mar 12 23:45:44.631585 systemd-coredump[2040]: Process 1985 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Mar 12 23:45:44.600384 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 12 23:45:44.600400 ntpd[1985]: corporation. 
Support and training for ntp-4 are Mar 12 23:45:44.600416 ntpd[1985]: available at https://www.nwtime.org/support Mar 12 23:45:44.600431 ntpd[1985]: ---------------------------------------------------- Mar 12 23:45:44.610412 ntpd[1985]: proto: precision = 0.096 usec (-23) Mar 12 23:45:44.613896 ntpd[1985]: basedate set to 2026-02-28 Mar 12 23:45:44.613926 ntpd[1985]: gps base set to 2026-03-01 (week 2408) Mar 12 23:45:44.614141 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Mar 12 23:45:44.614190 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 12 23:45:44.616542 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Mar 12 23:45:44.616587 ntpd[1985]: Listen normally on 3 eth0 172.31.21.65:123 Mar 12 23:45:44.616633 ntpd[1985]: Listen normally on 4 lo [::1]:123 Mar 12 23:45:44.649821 coreos-metadata[1979]: Mar 12 23:45:44.641 INFO Fetch successful Mar 12 23:45:44.649821 coreos-metadata[1979]: Mar 12 23:45:44.641 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 12 23:45:44.649821 coreos-metadata[1979]: Mar 12 23:45:44.642 INFO Fetch successful Mar 12 23:45:44.649821 coreos-metadata[1979]: Mar 12 23:45:44.642 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 12 23:45:44.649821 coreos-metadata[1979]: Mar 12 23:45:44.646 INFO Fetch successful Mar 12 23:45:44.616698 ntpd[1985]: bind(21) AF_INET6 [fe80::484:3aff:fe13:73d5%2]:123 flags 0x811 failed: Cannot assign requested address Mar 12 23:45:44.616737 ntpd[1985]: unable to create socket on eth0 (5) for [fe80::484:3aff:fe13:73d5%2]:123 Mar 12 23:45:44.660949 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 23:45:44.665471 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 12 23:45:44.676920 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. 
Mar 12 23:45:44.691240 systemd[1]: Started systemd-coredump@0-2040-0.service - Process Core Dump (PID 2040/UID 0). Mar 12 23:45:44.791962 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 23:45:44.801304 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 12 23:45:44.804475 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 23:45:44.869123 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 12 23:45:44.901196 extend-filesystems[2032]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 12 23:45:44.901196 extend-filesystems[2032]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 12 23:45:44.901196 extend-filesystems[2032]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 12 23:45:44.910206 extend-filesystems[1983]: Resized filesystem in /dev/nvme0n1p9 Mar 12 23:45:44.909861 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 23:45:44.912394 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 23:45:44.969862 bash[2076]: Updated "/home/core/.ssh/authorized_keys" Mar 12 23:45:44.973170 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 23:45:44.988507 systemd[1]: Starting sshkeys.service... Mar 12 23:45:45.115816 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 12 23:45:45.126249 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 12 23:45:45.130521 systemd-logind[1990]: Watching system buttons on /dev/input/event0 (Power Button) Mar 12 23:45:45.130575 systemd-logind[1990]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 12 23:45:45.136841 systemd-logind[1990]: New seat seat0. 
Mar 12 23:45:45.147460 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 23:45:45.275385 containerd[2024]: time="2026-03-12T23:45:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 12 23:45:45.281669 containerd[2024]: time="2026-03-12T23:45:45.281588242Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 12 23:45:45.331461 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 12 23:45:45.340062 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 12 23:45:45.360365 dbus-daemon[1980]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2039 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 12 23:45:45.371158 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.382681390Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.012µs" Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.382746694Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.382784434Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.383066422Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.383991298Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.384046462Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.384208138Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.384235438Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.384612706Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.384640786Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.384687022Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 12 23:45:45.385821 containerd[2024]: time="2026-03-12T23:45:45.384742738Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 12 23:45:45.386479 containerd[2024]: time="2026-03-12T23:45:45.384891982Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 12 23:45:45.386479 containerd[2024]: time="2026-03-12T23:45:45.385308790Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 12 23:45:45.386479 containerd[2024]: time="2026-03-12T23:45:45.385370842Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 12 23:45:45.386479 containerd[2024]: time="2026-03-12T23:45:45.385398130Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 12 23:45:45.386479 containerd[2024]: time="2026-03-12T23:45:45.385455334Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 12 23:45:45.386479 containerd[2024]: time="2026-03-12T23:45:45.386039446Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 12 23:45:45.395119 containerd[2024]: time="2026-03-12T23:45:45.391251382Z" level=info msg="metadata content store policy set" policy=shared Mar 12 23:45:45.403591 containerd[2024]: time="2026-03-12T23:45:45.403475698Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Mar 12 23:45:45.403999 containerd[2024]: time="2026-03-12T23:45:45.403932262Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 12 23:45:45.404181 containerd[2024]: time="2026-03-12T23:45:45.404150026Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407260415Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407346611Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407401955Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407435303Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407491943Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407530619Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407583779Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407615639Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.407696663Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Mar 12 23:45:45.410709 containerd[2024]: time="2026-03-12T23:45:45.409300091Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 12 23:45:45.411581 containerd[2024]: time="2026-03-12T23:45:45.411257447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 12 23:45:45.411581 containerd[2024]: time="2026-03-12T23:45:45.411342623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 12 23:45:45.411581 containerd[2024]: time="2026-03-12T23:45:45.411373607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 12 23:45:45.411581 containerd[2024]: time="2026-03-12T23:45:45.411426467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 12 23:45:45.411581 containerd[2024]: time="2026-03-12T23:45:45.411455207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 12 23:45:45.411581 containerd[2024]: time="2026-03-12T23:45:45.411508187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 12 23:45:45.414293 containerd[2024]: time="2026-03-12T23:45:45.411543551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 12 23:45:45.414293 containerd[2024]: time="2026-03-12T23:45:45.411906875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 12 23:45:45.414293 containerd[2024]: time="2026-03-12T23:45:45.411939023Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 12 23:45:45.414293 containerd[2024]: time="2026-03-12T23:45:45.413145779Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 12 23:45:45.414293 containerd[2024]: 
time="2026-03-12T23:45:45.414195719Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 12 23:45:45.414293 containerd[2024]: time="2026-03-12T23:45:45.414243911Z" level=info msg="Start snapshots syncer" Mar 12 23:45:45.419184 containerd[2024]: time="2026-03-12T23:45:45.416222711Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 12 23:45:45.419184 containerd[2024]: time="2026-03-12T23:45:45.416712527Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 12 23:45:45.419453 containerd[2024]: time="2026-03-12T23:45:45.416810639Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 12 23:45:45.419453 containerd[2024]: time="2026-03-12T23:45:45.416902487Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 12 23:45:45.422637 containerd[2024]: time="2026-03-12T23:45:45.422575331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.425189939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.425232119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.425293691Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.425326367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.425381579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.425410679Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.425502143Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.427135559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.427191263Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.427305887Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.427343531Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 12 23:45:45.428635 containerd[2024]: time="2026-03-12T23:45:45.428475875Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 12 23:45:45.431092 containerd[2024]: time="2026-03-12T23:45:45.428503799Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 12 23:45:45.431092 containerd[2024]: time="2026-03-12T23:45:45.429252539Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 12 23:45:45.431092 containerd[2024]: time="2026-03-12T23:45:45.429284795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 12 23:45:45.431092 containerd[2024]: time="2026-03-12T23:45:45.429338903Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 12 23:45:45.433763 containerd[2024]: time="2026-03-12T23:45:45.433215647Z" level=info msg="runtime interface created" Mar 12 23:45:45.433763 containerd[2024]: 
time="2026-03-12T23:45:45.433268171Z" level=info msg="created NRI interface" Mar 12 23:45:45.433763 containerd[2024]: time="2026-03-12T23:45:45.433323755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 12 23:45:45.433763 containerd[2024]: time="2026-03-12T23:45:45.433382951Z" level=info msg="Connect containerd service" Mar 12 23:45:45.433763 containerd[2024]: time="2026-03-12T23:45:45.433445279Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 23:45:45.452626 containerd[2024]: time="2026-03-12T23:45:45.446687819Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712276884Z" level=info msg="Start subscribing containerd event" Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712393080Z" level=info msg="Start recovering state" Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712560972Z" level=info msg="Start event monitor" Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712589904Z" level=info msg="Start cni network conf syncer for default" Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712609104Z" level=info msg="Start streaming server" Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712627992Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712646076Z" level=info msg="runtime interface starting up..." Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712662732Z" level=info msg="starting plugins..." 
Mar 12 23:45:45.713201 containerd[2024]: time="2026-03-12T23:45:45.712710024Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 12 23:45:45.713624 containerd[2024]: time="2026-03-12T23:45:45.713252580Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 23:45:45.713624 containerd[2024]: time="2026-03-12T23:45:45.713348016Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 23:45:45.713624 containerd[2024]: time="2026-03-12T23:45:45.713443284Z" level=info msg="containerd successfully booted in 0.438710s" Mar 12 23:45:45.714217 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 23:45:45.729288 sshd_keygen[2028]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 23:45:45.757316 coreos-metadata[2128]: Mar 12 23:45:45.757 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 12 23:45:45.770922 coreos-metadata[2128]: Mar 12 23:45:45.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 12 23:45:45.771028 coreos-metadata[2128]: Mar 12 23:45:45.770 INFO Fetch successful Mar 12 23:45:45.771028 coreos-metadata[2128]: Mar 12 23:45:45.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 12 23:45:45.771028 coreos-metadata[2128]: Mar 12 23:45:45.770 INFO Fetch successful Mar 12 23:45:45.774888 unknown[2128]: wrote ssh authorized keys file for user: core Mar 12 23:45:45.839366 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 23:45:45.849716 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 23:45:45.856328 update-ssh-keys[2198]: Updated "/home/core/.ssh/authorized_keys" Mar 12 23:45:45.858651 systemd[1]: Started sshd@0-172.31.21.65:22-4.153.228.146:33750.service - OpenSSH per-connection server daemon (4.153.228.146:33750). 
Mar 12 23:45:45.864790 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 12 23:45:45.873186 systemd[1]: Finished sshkeys.service. Mar 12 23:45:45.913270 systemd-networkd[1867]: eth0: Gained IPv6LL Mar 12 23:45:45.918952 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 23:45:45.924193 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 23:45:45.929907 locksmithd[2041]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 23:45:45.934808 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 12 23:45:45.945504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:45:45.950616 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 23:45:45.964756 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 23:45:45.967320 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 23:45:45.976546 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 23:45:46.052970 systemd-coredump[2049]: Process 1985 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1985: #0 0x0000aaaabe480b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaabe42fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaabe430240 n/a (ntpd + 0x10240) #3 0x0000aaaabe42be14 n/a (ntpd + 0xbe14) #4 0x0000aaaabe42d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaabe435a38 n/a (ntpd + 0x15a38) #6 0x0000aaaabe42738c n/a (ntpd + 0x738c) #7 0x0000ffff98832034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff98832118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaabe4273f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Mar 12 23:45:46.072501 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 12 23:45:46.072834 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 12 23:45:46.086759 systemd[1]: systemd-coredump@0-2040-0.service: Deactivated successfully. Mar 12 23:45:46.114187 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 23:45:46.128881 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 23:45:46.134951 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 23:45:46.138260 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 23:45:46.223060 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Mar 12 23:45:46.225295 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 23:45:46.233560 systemd[1]: Started ntpd.service - Network Time Service. Mar 12 23:45:46.246284 amazon-ssm-agent[2208]: Initializing new seelog logger Mar 12 23:45:46.246284 amazon-ssm-agent[2208]: New Seelog Logger Creation Complete Mar 12 23:45:46.246284 amazon-ssm-agent[2208]: 2026/03/12 23:45:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:45:46.246284 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 12 23:45:46.247796 amazon-ssm-agent[2208]: 2026/03/12 23:45:46 processing appconfig overrides Mar 12 23:45:46.250314 amazon-ssm-agent[2208]: 2026/03/12 23:45:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:45:46.250314 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:45:46.250314 amazon-ssm-agent[2208]: 2026/03/12 23:45:46 processing appconfig overrides Mar 12 23:45:46.250314 amazon-ssm-agent[2208]: 2026/03/12 23:45:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:45:46.250314 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:45:46.250314 amazon-ssm-agent[2208]: 2026/03/12 23:45:46 processing appconfig overrides Mar 12 23:45:46.253326 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.2495 INFO Proxy environment variables: Mar 12 23:45:46.258309 amazon-ssm-agent[2208]: 2026/03/12 23:45:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:45:46.258309 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 12 23:45:46.258309 amazon-ssm-agent[2208]: 2026/03/12 23:45:46 processing appconfig overrides Mar 12 23:45:46.285750 polkitd[2153]: Started polkitd version 126 Mar 12 23:45:46.329469 polkitd[2153]: Loading rules from directory /etc/polkit-1/rules.d Mar 12 23:45:46.332767 polkitd[2153]: Loading rules from directory /run/polkit-1/rules.d Mar 12 23:45:46.334844 polkitd[2153]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 12 23:45:46.336000 polkitd[2153]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 12 23:45:46.336058 polkitd[2153]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 12 23:45:46.336153 polkitd[2153]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 12 23:45:46.340764 ntpd[2239]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:41 UTC 2026 (1): Starting Mar 12 23:45:46.343831 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:41 UTC 2026 (1): Starting Mar 12 23:45:46.343831 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 12 23:45:46.343831 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: ---------------------------------------------------- Mar 12 23:45:46.343831 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: ntp-4 is maintained by Network Time Foundation, Mar 12 23:45:46.343831 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 12 23:45:46.343831 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: corporation. 
Support and training for ntp-4 are Mar 12 23:45:46.343831 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: available at https://www.nwtime.org/support Mar 12 23:45:46.343831 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: ---------------------------------------------------- Mar 12 23:45:46.342726 ntpd[2239]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 12 23:45:46.342744 ntpd[2239]: ---------------------------------------------------- Mar 12 23:45:46.342760 ntpd[2239]: ntp-4 is maintained by Network Time Foundation, Mar 12 23:45:46.342776 ntpd[2239]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 12 23:45:46.342791 ntpd[2239]: corporation. Support and training for ntp-4 are Mar 12 23:45:46.342806 ntpd[2239]: available at https://www.nwtime.org/support Mar 12 23:45:46.342823 ntpd[2239]: ---------------------------------------------------- Mar 12 23:45:46.347641 polkitd[2153]: Finished loading, compiling and executing 2 rules Mar 12 23:45:46.348464 ntpd[2239]: proto: precision = 0.096 usec (-23) Mar 12 23:45:46.349359 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 12 23:45:46.353240 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: proto: precision = 0.096 usec (-23) Mar 12 23:45:46.353240 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: basedate set to 2026-02-28 Mar 12 23:45:46.353240 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: gps base set to 2026-03-01 (week 2408) Mar 12 23:45:46.353240 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Listen and drop on 0 v6wildcard [::]:123 Mar 12 23:45:46.353240 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 12 23:45:46.348815 ntpd[2239]: basedate set to 2026-02-28 Mar 12 23:45:46.353540 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.2496 INFO https_proxy: Mar 12 23:45:46.348837 ntpd[2239]: gps base set to 2026-03-01 (week 2408) Mar 12 23:45:46.348969 ntpd[2239]: Listen and drop on 0 v6wildcard [::]:123 Mar 12 23:45:46.349013 ntpd[2239]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 12 23:45:46.354386 ntpd[2239]: Listen normally on 2 lo 127.0.0.1:123 Mar 12 23:45:46.356943 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Listen normally on 2 lo 127.0.0.1:123 Mar 12 23:45:46.356943 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Listen normally on 3 eth0 172.31.21.65:123 Mar 12 23:45:46.356943 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Listen normally on 4 lo [::1]:123 Mar 12 23:45:46.356943 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Listen normally on 5 eth0 [fe80::484:3aff:fe13:73d5%2]:123 Mar 12 23:45:46.356943 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: Listening on routing socket on fd #22 for interface updates Mar 12 23:45:46.354446 ntpd[2239]: Listen normally on 3 eth0 172.31.21.65:123 Mar 12 23:45:46.354493 ntpd[2239]: Listen normally on 4 lo [::1]:123 Mar 12 23:45:46.354535 ntpd[2239]: Listen normally on 5 eth0 [fe80::484:3aff:fe13:73d5%2]:123 Mar 12 23:45:46.354574 ntpd[2239]: Listening on routing socket on fd #22 for interface updates Mar 12 23:45:46.357356 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 12 23:45:46.360897 polkitd[2153]: Acquired 
the name org.freedesktop.PolicyKit1 on the system bus Mar 12 23:45:46.376052 ntpd[2239]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 12 23:45:46.377357 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 12 23:45:46.377357 ntpd[2239]: 12 Mar 23:45:46 ntpd[2239]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 12 23:45:46.376126 ntpd[2239]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 12 23:45:46.398301 systemd-resolved[1868]: System hostname changed to 'ip-172-31-21-65'. Mar 12 23:45:46.398307 systemd-hostnamed[2039]: Hostname set to (transient) Mar 12 23:45:46.452464 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.2496 INFO http_proxy: Mar 12 23:45:46.513939 sshd[2202]: Accepted publickey for core from 4.153.228.146 port 33750 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:45:46.518860 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:45:46.543983 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 23:45:46.551913 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 23:45:46.559952 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.2496 INFO no_proxy: Mar 12 23:45:46.585113 systemd-logind[1990]: New session 1 of user core. Mar 12 23:45:46.602200 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 23:45:46.616787 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 23:45:46.654549 (systemd)[2254]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 23:45:46.655405 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.2498 INFO Checking if agent identity type OnPrem can be assumed Mar 12 23:45:46.664681 systemd-logind[1990]: New session c1 of user core. 
Mar 12 23:45:46.754184 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.2499 INFO Checking if agent identity type EC2 can be assumed Mar 12 23:45:46.853531 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4346 INFO Agent will take identity from EC2 Mar 12 23:45:46.953733 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4381 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 12 23:45:46.953947 tar[1999]: linux-arm64/README.md Mar 12 23:45:46.993902 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 23:45:47.043011 systemd[2254]: Queued start job for default target default.target. Mar 12 23:45:47.050370 systemd[2254]: Created slice app.slice - User Application Slice. Mar 12 23:45:47.050437 systemd[2254]: Reached target paths.target - Paths. Mar 12 23:45:47.050526 systemd[2254]: Reached target timers.target - Timers. Mar 12 23:45:47.053276 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4382 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 12 23:45:47.054027 systemd[2254]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 23:45:47.095315 systemd[2254]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 23:45:47.095728 systemd[2254]: Reached target sockets.target - Sockets. Mar 12 23:45:47.095930 systemd[2254]: Reached target basic.target - Basic System. Mar 12 23:45:47.096259 systemd[2254]: Reached target default.target - Main User Target. Mar 12 23:45:47.096323 systemd[2254]: Startup finished in 409ms. Mar 12 23:45:47.096450 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 23:45:47.107350 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 23:45:47.152449 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4382 INFO [amazon-ssm-agent] Starting Core Agent Mar 12 23:45:47.254245 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4382 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Mar 12 23:45:47.354524 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4382 INFO [Registrar] Starting registrar module Mar 12 23:45:47.371509 systemd[1]: Started sshd@1-172.31.21.65:22-4.153.228.146:33758.service - OpenSSH per-connection server daemon (4.153.228.146:33758). Mar 12 23:45:47.455264 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4403 INFO [EC2Identity] Checking disk for registration info Mar 12 23:45:47.555591 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4404 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 12 23:45:47.656600 amazon-ssm-agent[2208]: 2026-03-12 23:45:46.4404 INFO [EC2Identity] Generating registration keypair Mar 12 23:45:47.758751 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.7580 INFO [EC2Identity] Checking write access before registering Mar 12 23:45:47.812056 amazon-ssm-agent[2208]: 2026/03/12 23:45:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:45:47.812056 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 12 23:45:47.812236 amazon-ssm-agent[2208]: 2026/03/12 23:45:47 processing appconfig overrides Mar 12 23:45:47.842091 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.7595 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 12 23:45:47.842091 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.8117 INFO [EC2Identity] EC2 registration was successful. Mar 12 23:45:47.842315 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.8118 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Mar 12 23:45:47.842315 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.8119 INFO [CredentialRefresher] credentialRefresher has started Mar 12 23:45:47.842315 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.8119 INFO [CredentialRefresher] Starting credentials refresher loop Mar 12 23:45:47.842315 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.8417 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 12 23:45:47.842315 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.8419 INFO [CredentialRefresher] Credentials ready Mar 12 23:45:47.843105 sshd[2270]: Accepted publickey for core from 4.153.228.146 port 33758 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:45:47.845752 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:45:47.856884 systemd-logind[1990]: New session 2 of user core. Mar 12 23:45:47.859809 amazon-ssm-agent[2208]: 2026-03-12 23:45:47.8421 INFO [CredentialRefresher] Next credential rotation will be in 29.9999921044 minutes Mar 12 23:45:47.866363 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 23:45:47.983931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:45:47.988950 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 23:45:47.993292 systemd[1]: Startup finished in 3.839s (kernel) + 9.013s (initrd) + 9.355s (userspace) = 22.208s. Mar 12 23:45:48.001893 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:45:48.085913 sshd[2273]: Connection closed by 4.153.228.146 port 33758 Mar 12 23:45:48.087373 sshd-session[2270]: pam_unix(sshd:session): session closed for user core Mar 12 23:45:48.096337 systemd[1]: sshd@1-172.31.21.65:22-4.153.228.146:33758.service: Deactivated successfully. Mar 12 23:45:48.101396 systemd[1]: session-2.scope: Deactivated successfully. 
Mar 12 23:45:48.106755 systemd-logind[1990]: Session 2 logged out. Waiting for processes to exit. Mar 12 23:45:48.109337 systemd-logind[1990]: Removed session 2. Mar 12 23:45:48.180492 systemd[1]: Started sshd@2-172.31.21.65:22-4.153.228.146:33768.service - OpenSSH per-connection server daemon (4.153.228.146:33768). Mar 12 23:45:48.632974 sshd[2289]: Accepted publickey for core from 4.153.228.146 port 33768 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:45:48.635385 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:45:48.647934 systemd-logind[1990]: New session 3 of user core. Mar 12 23:45:48.652375 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 23:45:48.862947 sshd[2296]: Connection closed by 4.153.228.146 port 33768 Mar 12 23:45:48.863851 sshd-session[2289]: pam_unix(sshd:session): session closed for user core Mar 12 23:45:48.877434 systemd-logind[1990]: Session 3 logged out. Waiting for processes to exit. Mar 12 23:45:48.877957 systemd[1]: sshd@2-172.31.21.65:22-4.153.228.146:33768.service: Deactivated successfully. Mar 12 23:45:48.883921 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 23:45:48.889833 systemd-logind[1990]: Removed session 3. Mar 12 23:45:48.892252 amazon-ssm-agent[2208]: 2026-03-12 23:45:48.8898 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 12 23:45:48.991593 amazon-ssm-agent[2208]: 2026-03-12 23:45:48.8944 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2304) started Mar 12 23:45:49.009836 systemd[1]: Started sshd@3-172.31.21.65:22-4.153.228.146:60830.service - OpenSSH per-connection server daemon (4.153.228.146:60830). 
Mar 12 23:45:49.092055 amazon-ssm-agent[2208]: 2026-03-12 23:45:48.8945 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 12 23:45:49.097101 kubelet[2278]: E0312 23:45:49.096188 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:45:49.111980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:45:49.112352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:45:49.114221 systemd[1]: kubelet.service: Consumed 1.406s CPU time, 259.1M memory peak. Mar 12 23:45:49.520928 sshd[2309]: Accepted publickey for core from 4.153.228.146 port 60830 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:45:49.523286 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:45:49.532161 systemd-logind[1990]: New session 4 of user core. Mar 12 23:45:49.542367 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 23:45:49.759137 sshd[2321]: Connection closed by 4.153.228.146 port 60830 Mar 12 23:45:49.759333 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Mar 12 23:45:49.766647 systemd-logind[1990]: Session 4 logged out. Waiting for processes to exit. Mar 12 23:45:49.766907 systemd[1]: sshd@3-172.31.21.65:22-4.153.228.146:60830.service: Deactivated successfully. Mar 12 23:45:49.770106 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 23:45:49.774845 systemd-logind[1990]: Removed session 4. Mar 12 23:45:49.862603 systemd[1]: Started sshd@4-172.31.21.65:22-4.153.228.146:60840.service - OpenSSH per-connection server daemon (4.153.228.146:60840). 
Mar 12 23:45:50.319466 sshd[2328]: Accepted publickey for core from 4.153.228.146 port 60840 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:45:50.321794 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:45:50.331311 systemd-logind[1990]: New session 5 of user core. Mar 12 23:45:50.335353 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 23:45:50.539610 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 23:45:50.540262 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 23:45:50.568674 sudo[2332]: pam_unix(sudo:session): session closed for user root Mar 12 23:45:50.646334 sshd[2331]: Connection closed by 4.153.228.146 port 60840 Mar 12 23:45:50.647344 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Mar 12 23:45:50.654594 systemd-logind[1990]: Session 5 logged out. Waiting for processes to exit. Mar 12 23:45:50.656217 systemd[1]: sshd@4-172.31.21.65:22-4.153.228.146:60840.service: Deactivated successfully. Mar 12 23:45:50.660549 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 23:45:50.663271 systemd-logind[1990]: Removed session 5. Mar 12 23:45:50.741258 systemd[1]: Started sshd@5-172.31.21.65:22-4.153.228.146:60844.service - OpenSSH per-connection server daemon (4.153.228.146:60844). Mar 12 23:45:51.199846 sshd[2338]: Accepted publickey for core from 4.153.228.146 port 60844 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:45:51.202194 sshd-session[2338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:45:51.211175 systemd-logind[1990]: New session 6 of user core. Mar 12 23:45:51.220357 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 12 23:45:51.368740 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 23:45:51.369508 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 23:45:51.392856 sudo[2343]: pam_unix(sudo:session): session closed for user root Mar 12 23:45:51.403269 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 12 23:45:51.403903 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 23:45:51.427535 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 12 23:45:51.492413 augenrules[2365]: No rules Mar 12 23:45:51.495060 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 23:45:51.495873 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 12 23:45:51.498393 sudo[2342]: pam_unix(sudo:session): session closed for user root Mar 12 23:45:51.577217 sshd[2341]: Connection closed by 4.153.228.146 port 60844 Mar 12 23:45:51.578415 sshd-session[2338]: pam_unix(sshd:session): session closed for user core Mar 12 23:45:51.585061 systemd-logind[1990]: Session 6 logged out. Waiting for processes to exit. Mar 12 23:45:51.585235 systemd[1]: sshd@5-172.31.21.65:22-4.153.228.146:60844.service: Deactivated successfully. Mar 12 23:45:51.590609 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 23:45:51.594512 systemd-logind[1990]: Removed session 6. Mar 12 23:45:51.668550 systemd[1]: Started sshd@6-172.31.21.65:22-4.153.228.146:60860.service - OpenSSH per-connection server daemon (4.153.228.146:60860). 
Mar 12 23:45:52.114704 sshd[2374]: Accepted publickey for core from 4.153.228.146 port 60860 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:45:52.116933 sshd-session[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:45:52.125175 systemd-logind[1990]: New session 7 of user core. Mar 12 23:45:52.132341 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 23:45:52.276320 sudo[2378]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 23:45:52.276933 sudo[2378]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 23:45:52.976488 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 23:45:52.991847 (dockerd)[2396]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 23:45:53.749987 systemd-resolved[1868]: Clock change detected. Flushing caches. Mar 12 23:45:53.937932 dockerd[2396]: time="2026-03-12T23:45:53.937311968Z" level=info msg="Starting up" Mar 12 23:45:53.941207 dockerd[2396]: time="2026-03-12T23:45:53.941157092Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 12 23:45:53.961574 dockerd[2396]: time="2026-03-12T23:45:53.961512789Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 12 23:45:54.043495 systemd[1]: var-lib-docker-metacopy\x2dcheck468572037-merged.mount: Deactivated successfully. Mar 12 23:45:54.061999 dockerd[2396]: time="2026-03-12T23:45:54.061558421Z" level=info msg="Loading containers: start." Mar 12 23:45:54.075948 kernel: Initializing XFRM netlink socket Mar 12 23:45:54.445833 (udev-worker)[2418]: Network interface NamePolicy= disabled on kernel command line. 
Mar 12 23:45:54.525193 systemd-networkd[1867]: docker0: Link UP Mar 12 23:45:54.547711 dockerd[2396]: time="2026-03-12T23:45:54.547596475Z" level=info msg="Loading containers: done." Mar 12 23:45:54.584926 dockerd[2396]: time="2026-03-12T23:45:54.584599628Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 23:45:54.584926 dockerd[2396]: time="2026-03-12T23:45:54.584710172Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 12 23:45:54.584926 dockerd[2396]: time="2026-03-12T23:45:54.584848676Z" level=info msg="Initializing buildkit" Mar 12 23:45:54.637241 dockerd[2396]: time="2026-03-12T23:45:54.636932336Z" level=info msg="Completed buildkit initialization" Mar 12 23:45:54.653236 dockerd[2396]: time="2026-03-12T23:45:54.653170496Z" level=info msg="Daemon has completed initialization" Mar 12 23:45:54.653648 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 23:45:54.656448 dockerd[2396]: time="2026-03-12T23:45:54.654717680Z" level=info msg="API listen on /run/docker.sock" Mar 12 23:45:54.992761 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3303105248-merged.mount: Deactivated successfully. Mar 12 23:45:55.547310 containerd[2024]: time="2026-03-12T23:45:55.547187204Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 12 23:45:56.204021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976635116.mount: Deactivated successfully. 
Mar 12 23:45:57.700927 containerd[2024]: time="2026-03-12T23:45:57.699419699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:45:57.701713 containerd[2024]: time="2026-03-12T23:45:57.701671415Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174" Mar 12 23:45:57.703864 containerd[2024]: time="2026-03-12T23:45:57.703821047Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:45:57.709318 containerd[2024]: time="2026-03-12T23:45:57.709252463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:45:57.711481 containerd[2024]: time="2026-03-12T23:45:57.711426431Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.164177775s" Mar 12 23:45:57.711664 containerd[2024]: time="2026-03-12T23:45:57.711635435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\"" Mar 12 23:45:57.712532 containerd[2024]: time="2026-03-12T23:45:57.712468439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 12 23:45:59.212885 containerd[2024]: time="2026-03-12T23:45:59.212803235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:45:59.216085 containerd[2024]: time="2026-03-12T23:45:59.215740067Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106" Mar 12 23:45:59.222034 containerd[2024]: time="2026-03-12T23:45:59.221975327Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:45:59.230062 containerd[2024]: time="2026-03-12T23:45:59.230006195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:45:59.233561 containerd[2024]: time="2026-03-12T23:45:59.233355959Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.520828132s" Mar 12 23:45:59.233561 containerd[2024]: time="2026-03-12T23:45:59.233415011Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\"" Mar 12 23:45:59.234555 containerd[2024]: time="2026-03-12T23:45:59.234021983Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 12 23:45:59.768729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 23:45:59.772050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:00.200734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 23:46:00.216550 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:46:00.340531 kubelet[2681]: E0312 23:46:00.340411 2681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:46:00.352424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:46:00.352743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:46:00.353639 systemd[1]: kubelet.service: Consumed 380ms CPU time, 107.1M memory peak. Mar 12 23:46:00.731044 containerd[2024]: time="2026-03-12T23:46:00.730988882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:00.733308 containerd[2024]: time="2026-03-12T23:46:00.733253642Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305" Mar 12 23:46:00.735508 containerd[2024]: time="2026-03-12T23:46:00.735438014Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:00.740799 containerd[2024]: time="2026-03-12T23:46:00.740717654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:00.742940 containerd[2024]: time="2026-03-12T23:46:00.742726766Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id 
\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.508657707s" Mar 12 23:46:00.742940 containerd[2024]: time="2026-03-12T23:46:00.742777706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\"" Mar 12 23:46:00.743833 containerd[2024]: time="2026-03-12T23:46:00.743518814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 12 23:46:02.011871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737030945.mount: Deactivated successfully. Mar 12 23:46:02.610961 containerd[2024]: time="2026-03-12T23:46:02.610454932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:02.613016 containerd[2024]: time="2026-03-12T23:46:02.612646048Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870" Mar 12 23:46:02.615262 containerd[2024]: time="2026-03-12T23:46:02.615196576Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:02.619752 containerd[2024]: time="2026-03-12T23:46:02.619694380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:02.620880 containerd[2024]: time="2026-03-12T23:46:02.620823184Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.877250286s" Mar 12 23:46:02.621318 containerd[2024]: time="2026-03-12T23:46:02.620878336Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\"" Mar 12 23:46:02.621699 containerd[2024]: time="2026-03-12T23:46:02.621665656Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 12 23:46:03.156076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3230828934.mount: Deactivated successfully. Mar 12 23:46:04.286917 containerd[2024]: time="2026-03-12T23:46:04.286016116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:04.289671 containerd[2024]: time="2026-03-12T23:46:04.289620820Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Mar 12 23:46:04.292443 containerd[2024]: time="2026-03-12T23:46:04.292390720Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:04.297986 containerd[2024]: time="2026-03-12T23:46:04.297934420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:04.300950 containerd[2024]: time="2026-03-12T23:46:04.300051316Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.677503456s" Mar 12 23:46:04.300950 containerd[2024]: time="2026-03-12T23:46:04.300113848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Mar 12 23:46:04.301480 containerd[2024]: time="2026-03-12T23:46:04.301405672Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 12 23:46:04.806948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3766436703.mount: Deactivated successfully. Mar 12 23:46:04.819858 containerd[2024]: time="2026-03-12T23:46:04.819780474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 23:46:04.821948 containerd[2024]: time="2026-03-12T23:46:04.821641602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 12 23:46:04.824291 containerd[2024]: time="2026-03-12T23:46:04.824213311Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 23:46:04.828949 containerd[2024]: time="2026-03-12T23:46:04.828700159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 23:46:04.831014 containerd[2024]: time="2026-03-12T23:46:04.830095051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 528.623691ms" Mar 12 23:46:04.831014 containerd[2024]: time="2026-03-12T23:46:04.830148871Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 12 23:46:04.831421 containerd[2024]: time="2026-03-12T23:46:04.831383023Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 12 23:46:05.418948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601662253.mount: Deactivated successfully. Mar 12 23:46:06.857514 containerd[2024]: time="2026-03-12T23:46:06.857431005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:06.861007 containerd[2024]: time="2026-03-12T23:46:06.860953701Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780" Mar 12 23:46:06.863587 containerd[2024]: time="2026-03-12T23:46:06.863509401Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:06.869216 containerd[2024]: time="2026-03-12T23:46:06.869151681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:06.871370 containerd[2024]: time="2026-03-12T23:46:06.871189749Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size 
\"21882972\" in 2.039662018s" Mar 12 23:46:06.871370 containerd[2024]: time="2026-03-12T23:46:06.871242849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Mar 12 23:46:10.604992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 23:46:10.609205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:10.936181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:10.948741 (kubelet)[2845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:46:11.030742 kubelet[2845]: E0312 23:46:11.030647 2845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:46:11.035400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:46:11.036488 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:46:11.037357 systemd[1]: kubelet.service: Consumed 298ms CPU time, 106.9M memory peak. Mar 12 23:46:15.818817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:15.819715 systemd[1]: kubelet.service: Consumed 298ms CPU time, 106.9M memory peak. Mar 12 23:46:15.824841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:15.879055 systemd[1]: Reload requested from client PID 2859 ('systemctl') (unit session-7.scope)... Mar 12 23:46:15.879089 systemd[1]: Reloading... Mar 12 23:46:16.140003 zram_generator::config[2910]: No configuration found. 
Mar 12 23:46:16.581800 systemd[1]: Reloading finished in 702 ms. Mar 12 23:46:16.691671 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 23:46:16.691871 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 23:46:16.692584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:16.692681 systemd[1]: kubelet.service: Consumed 225ms CPU time, 95M memory peak. Mar 12 23:46:16.696980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:16.841362 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 12 23:46:17.216569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:17.227697 (kubelet)[2971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 23:46:17.299734 kubelet[2971]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 23:46:17.299734 kubelet[2971]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 23:46:17.299734 kubelet[2971]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 23:46:17.300328 kubelet[2971]: I0312 23:46:17.299886 2971 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 23:46:18.872752 kubelet[2971]: I0312 23:46:18.872683 2971 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 23:46:18.872752 kubelet[2971]: I0312 23:46:18.872733 2971 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 23:46:18.873377 kubelet[2971]: I0312 23:46:18.873134 2971 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 23:46:18.931822 kubelet[2971]: E0312 23:46:18.931750 2971 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.65:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 23:46:18.934017 kubelet[2971]: I0312 23:46:18.933756 2971 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 23:46:18.947105 kubelet[2971]: I0312 23:46:18.946956 2971 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 23:46:18.953598 kubelet[2971]: I0312 23:46:18.953534 2971 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 23:46:18.957126 kubelet[2971]: I0312 23:46:18.957050 2971 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 23:46:18.957408 kubelet[2971]: I0312 23:46:18.957123 2971 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 23:46:18.957592 kubelet[2971]: I0312 23:46:18.957408 2971 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 
23:46:18.957592 kubelet[2971]: I0312 23:46:18.957429 2971 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 23:46:18.959728 kubelet[2971]: I0312 23:46:18.959679 2971 state_mem.go:36] "Initialized new in-memory state store" Mar 12 23:46:18.967429 kubelet[2971]: I0312 23:46:18.967364 2971 kubelet.go:480] "Attempting to sync node with API server" Mar 12 23:46:18.967429 kubelet[2971]: I0312 23:46:18.967412 2971 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 23:46:18.967605 kubelet[2971]: I0312 23:46:18.967464 2971 kubelet.go:386] "Adding apiserver pod source" Mar 12 23:46:18.969616 kubelet[2971]: I0312 23:46:18.969576 2971 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 23:46:18.974512 kubelet[2971]: I0312 23:46:18.974479 2971 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 12 23:46:18.975800 kubelet[2971]: I0312 23:46:18.975754 2971 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 23:46:18.976198 kubelet[2971]: W0312 23:46:18.976176 2971 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 12 23:46:18.981238 kubelet[2971]: I0312 23:46:18.981196 2971 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 23:46:18.981381 kubelet[2971]: I0312 23:46:18.981273 2971 server.go:1289] "Started kubelet" Mar 12 23:46:18.982953 kubelet[2971]: E0312 23:46:18.981555 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-65&limit=500&resourceVersion=0\": dial tcp 172.31.21.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 23:46:18.989134 kubelet[2971]: E0312 23:46:18.989074 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 23:46:18.989738 kubelet[2971]: I0312 23:46:18.989687 2971 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 23:46:18.992451 kubelet[2971]: I0312 23:46:18.992409 2971 server.go:317] "Adding debug handlers to kubelet server" Mar 12 23:46:18.993187 kubelet[2971]: I0312 23:46:18.993140 2971 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 23:46:19.006659 kubelet[2971]: I0312 23:46:19.006591 2971 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 23:46:19.011114 kubelet[2971]: I0312 23:46:19.011060 2971 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 23:46:19.011572 kubelet[2971]: E0312 23:46:19.011521 2971 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-65\" not found" Mar 12 23:46:19.012272 kubelet[2971]: I0312 23:46:19.012219 2971 
kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 23:46:19.012527 kubelet[2971]: I0312 23:46:19.012483 2971 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 23:46:19.012624 kubelet[2971]: I0312 23:46:19.012589 2971 reconciler.go:26] "Reconciler: start to sync state" Mar 12 23:46:19.013618 kubelet[2971]: I0312 23:46:19.013057 2971 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 23:46:19.013618 kubelet[2971]: I0312 23:46:19.013422 2971 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 23:46:19.014623 kubelet[2971]: E0312 23:46:19.014564 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 23:46:19.014777 kubelet[2971]: E0312 23:46:19.014716 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-65?timeout=10s\": dial tcp 172.31.21.65:6443: connect: connection refused" interval="200ms" Mar 12 23:46:19.015651 kubelet[2971]: I0312 23:46:19.015605 2971 factory.go:223] Registration of the systemd container factory successfully Mar 12 23:46:19.018445 kubelet[2971]: E0312 23:46:19.015778 2971 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.65:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.65:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-65.189c3cc7125aba05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-65,UID:ip-172-31-21-65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-65,},FirstTimestamp:2026-03-12 23:46:18.981227013 +0000 UTC m=+1.746498250,LastTimestamp:2026-03-12 23:46:18.981227013 +0000 UTC m=+1.746498250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-65,}" Mar 12 23:46:19.019275 kubelet[2971]: I0312 23:46:19.019226 2971 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 23:46:19.028473 kubelet[2971]: I0312 23:46:19.028426 2971 factory.go:223] Registration of the containerd container factory successfully Mar 12 23:46:19.044079 kubelet[2971]: I0312 23:46:19.044032 2971 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 23:46:19.045547 kubelet[2971]: I0312 23:46:19.045037 2971 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 23:46:19.045547 kubelet[2971]: I0312 23:46:19.045083 2971 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 23:46:19.045547 kubelet[2971]: I0312 23:46:19.045099 2971 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 23:46:19.045547 kubelet[2971]: E0312 23:46:19.045167 2971 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 23:46:19.052965 kubelet[2971]: E0312 23:46:19.052576 2971 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 23:46:19.053227 kubelet[2971]: E0312 23:46:19.053193 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 23:46:19.069311 kubelet[2971]: I0312 23:46:19.069280 2971 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 23:46:19.069465 kubelet[2971]: I0312 23:46:19.069446 2971 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 23:46:19.069566 kubelet[2971]: I0312 23:46:19.069549 2971 state_mem.go:36] "Initialized new in-memory state store" Mar 12 23:46:19.073574 kubelet[2971]: I0312 23:46:19.073545 2971 policy_none.go:49] "None policy: Start" Mar 12 23:46:19.073727 kubelet[2971]: I0312 23:46:19.073709 2971 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 23:46:19.073830 kubelet[2971]: I0312 23:46:19.073813 2971 state_mem.go:35] "Initializing new in-memory state store" Mar 12 23:46:19.085413 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 23:46:19.101433 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 23:46:19.109417 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 12 23:46:19.111687 kubelet[2971]: E0312 23:46:19.111642 2971 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-65\" not found" Mar 12 23:46:19.130269 kubelet[2971]: E0312 23:46:19.130126 2971 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 23:46:19.130934 kubelet[2971]: I0312 23:46:19.130454 2971 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 23:46:19.130934 kubelet[2971]: I0312 23:46:19.130488 2971 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 23:46:19.131625 kubelet[2971]: I0312 23:46:19.131544 2971 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 23:46:19.135853 kubelet[2971]: E0312 23:46:19.135538 2971 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 23:46:19.135853 kubelet[2971]: E0312 23:46:19.135627 2971 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-65\" not found" Mar 12 23:46:19.166724 systemd[1]: Created slice kubepods-burstable-pod5adefd0a9253d0d3c214b55ee8d1ceb5.slice - libcontainer container kubepods-burstable-pod5adefd0a9253d0d3c214b55ee8d1ceb5.slice. Mar 12 23:46:19.186147 kubelet[2971]: E0312 23:46:19.186080 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:19.192014 systemd[1]: Created slice kubepods-burstable-podf1eaa1639352e88e0c783769587a79d1.slice - libcontainer container kubepods-burstable-podf1eaa1639352e88e0c783769587a79d1.slice. 
Mar 12 23:46:19.197327 kubelet[2971]: E0312 23:46:19.196977 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:19.203460 systemd[1]: Created slice kubepods-burstable-pod38096cff036c2678d7e25a0dc9a7e7e2.slice - libcontainer container kubepods-burstable-pod38096cff036c2678d7e25a0dc9a7e7e2.slice. Mar 12 23:46:19.207936 kubelet[2971]: E0312 23:46:19.207868 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:19.213263 kubelet[2971]: I0312 23:46:19.213216 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1eaa1639352e88e0c783769587a79d1-ca-certs\") pod \"kube-apiserver-ip-172-31-21-65\" (UID: \"f1eaa1639352e88e0c783769587a79d1\") " pod="kube-system/kube-apiserver-ip-172-31-21-65" Mar 12 23:46:19.213263 kubelet[2971]: I0312 23:46:19.213274 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65" Mar 12 23:46:19.213263 kubelet[2971]: I0312 23:46:19.213323 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65" Mar 12 23:46:19.213662 kubelet[2971]: I0312 23:46:19.213360 2971 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65" Mar 12 23:46:19.213662 kubelet[2971]: I0312 23:46:19.213395 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65" Mar 12 23:46:19.213662 kubelet[2971]: I0312 23:46:19.213454 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1eaa1639352e88e0c783769587a79d1-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-65\" (UID: \"f1eaa1639352e88e0c783769587a79d1\") " pod="kube-system/kube-apiserver-ip-172-31-21-65" Mar 12 23:46:19.214101 kubelet[2971]: I0312 23:46:19.213876 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1eaa1639352e88e0c783769587a79d1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-65\" (UID: \"f1eaa1639352e88e0c783769587a79d1\") " pod="kube-system/kube-apiserver-ip-172-31-21-65" Mar 12 23:46:19.214101 kubelet[2971]: I0312 23:46:19.213968 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65" Mar 
12 23:46:19.214101 kubelet[2971]: I0312 23:46:19.214031 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5adefd0a9253d0d3c214b55ee8d1ceb5-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-65\" (UID: \"5adefd0a9253d0d3c214b55ee8d1ceb5\") " pod="kube-system/kube-scheduler-ip-172-31-21-65" Mar 12 23:46:19.215364 kubelet[2971]: E0312 23:46:19.215303 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-65?timeout=10s\": dial tcp 172.31.21.65:6443: connect: connection refused" interval="400ms" Mar 12 23:46:19.233610 kubelet[2971]: I0312 23:46:19.233561 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-65" Mar 12 23:46:19.234685 kubelet[2971]: E0312 23:46:19.234612 2971 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.65:6443/api/v1/nodes\": dial tcp 172.31.21.65:6443: connect: connection refused" node="ip-172-31-21-65" Mar 12 23:46:19.437444 kubelet[2971]: I0312 23:46:19.437314 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-65" Mar 12 23:46:19.438138 kubelet[2971]: E0312 23:46:19.438087 2971 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.65:6443/api/v1/nodes\": dial tcp 172.31.21.65:6443: connect: connection refused" node="ip-172-31-21-65" Mar 12 23:46:19.488713 containerd[2024]: time="2026-03-12T23:46:19.488620795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-65,Uid:5adefd0a9253d0d3c214b55ee8d1ceb5,Namespace:kube-system,Attempt:0,}" Mar 12 23:46:19.498751 containerd[2024]: time="2026-03-12T23:46:19.498644983Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-65,Uid:f1eaa1639352e88e0c783769587a79d1,Namespace:kube-system,Attempt:0,}" Mar 12 23:46:19.510194 containerd[2024]: time="2026-03-12T23:46:19.510121051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-65,Uid:38096cff036c2678d7e25a0dc9a7e7e2,Namespace:kube-system,Attempt:0,}" Mar 12 23:46:19.545671 kubelet[2971]: E0312 23:46:19.544940 2971 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.65:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.65:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-65.189c3cc7125aba05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-65,UID:ip-172-31-21-65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-65,},FirstTimestamp:2026-03-12 23:46:18.981227013 +0000 UTC m=+1.746498250,LastTimestamp:2026-03-12 23:46:18.981227013 +0000 UTC m=+1.746498250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-65,}" Mar 12 23:46:19.573108 containerd[2024]: time="2026-03-12T23:46:19.572878892Z" level=info msg="connecting to shim c64c5542bcb541d5e3bcc2fd13ad65284f0d538b7ff1d5680ede478cbd8f3954" address="unix:///run/containerd/s/2f4121e5b151b1cbed2d464fe26b0a431315fc95a45c4fe4ff88d84474afcae7" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:46:19.600687 containerd[2024]: time="2026-03-12T23:46:19.600635540Z" level=info msg="connecting to shim 723e333f0b4ba938205c61c073f7ad6e57145aa0a11923ef82b7de8f20325443" address="unix:///run/containerd/s/73492fcdef5e7114cc047d869fc8ba8932e755734a2f7d3be4be6c7228a3d772" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:46:19.605767 containerd[2024]: 
time="2026-03-12T23:46:19.605675648Z" level=info msg="connecting to shim 42e67c40b16f93b172f53920b7c3c57a7667f142e3e5edee14e242a70b957dda" address="unix:///run/containerd/s/f73f43bb1631c6dfba4548008870f26801120c6a294be536504b4ef986df9d9a" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:46:19.616023 kubelet[2971]: E0312 23:46:19.615959 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-65?timeout=10s\": dial tcp 172.31.21.65:6443: connect: connection refused" interval="800ms" Mar 12 23:46:19.667956 systemd[1]: Started cri-containerd-c64c5542bcb541d5e3bcc2fd13ad65284f0d538b7ff1d5680ede478cbd8f3954.scope - libcontainer container c64c5542bcb541d5e3bcc2fd13ad65284f0d538b7ff1d5680ede478cbd8f3954. Mar 12 23:46:19.682987 systemd[1]: Started cri-containerd-723e333f0b4ba938205c61c073f7ad6e57145aa0a11923ef82b7de8f20325443.scope - libcontainer container 723e333f0b4ba938205c61c073f7ad6e57145aa0a11923ef82b7de8f20325443. Mar 12 23:46:19.700325 systemd[1]: Started cri-containerd-42e67c40b16f93b172f53920b7c3c57a7667f142e3e5edee14e242a70b957dda.scope - libcontainer container 42e67c40b16f93b172f53920b7c3c57a7667f142e3e5edee14e242a70b957dda. 
Mar 12 23:46:19.830574 containerd[2024]: time="2026-03-12T23:46:19.830434245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-65,Uid:5adefd0a9253d0d3c214b55ee8d1ceb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c64c5542bcb541d5e3bcc2fd13ad65284f0d538b7ff1d5680ede478cbd8f3954\"" Mar 12 23:46:19.844529 kubelet[2971]: I0312 23:46:19.843485 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-65" Mar 12 23:46:19.844529 kubelet[2971]: E0312 23:46:19.844000 2971 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.65:6443/api/v1/nodes\": dial tcp 172.31.21.65:6443: connect: connection refused" node="ip-172-31-21-65" Mar 12 23:46:19.845576 containerd[2024]: time="2026-03-12T23:46:19.845517189Z" level=info msg="CreateContainer within sandbox \"c64c5542bcb541d5e3bcc2fd13ad65284f0d538b7ff1d5680ede478cbd8f3954\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 23:46:19.850765 containerd[2024]: time="2026-03-12T23:46:19.850671969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-65,Uid:f1eaa1639352e88e0c783769587a79d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"723e333f0b4ba938205c61c073f7ad6e57145aa0a11923ef82b7de8f20325443\"" Mar 12 23:46:19.860400 containerd[2024]: time="2026-03-12T23:46:19.860353125Z" level=info msg="CreateContainer within sandbox \"723e333f0b4ba938205c61c073f7ad6e57145aa0a11923ef82b7de8f20325443\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 23:46:19.870612 containerd[2024]: time="2026-03-12T23:46:19.870552105Z" level=info msg="Container 6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:46:19.878933 containerd[2024]: time="2026-03-12T23:46:19.878844345Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-65,Uid:38096cff036c2678d7e25a0dc9a7e7e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"42e67c40b16f93b172f53920b7c3c57a7667f142e3e5edee14e242a70b957dda\"" Mar 12 23:46:19.889044 containerd[2024]: time="2026-03-12T23:46:19.888986889Z" level=info msg="CreateContainer within sandbox \"42e67c40b16f93b172f53920b7c3c57a7667f142e3e5edee14e242a70b957dda\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 23:46:19.890022 containerd[2024]: time="2026-03-12T23:46:19.889444689Z" level=info msg="CreateContainer within sandbox \"c64c5542bcb541d5e3bcc2fd13ad65284f0d538b7ff1d5680ede478cbd8f3954\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be\"" Mar 12 23:46:19.892778 containerd[2024]: time="2026-03-12T23:46:19.892730769Z" level=info msg="StartContainer for \"6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be\"" Mar 12 23:46:19.893254 containerd[2024]: time="2026-03-12T23:46:19.892821369Z" level=info msg="Container 8698957d758212346b66a7a3e4bfbbdf6136e3c8c059164d3e8c978976b84bd7: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:46:19.896934 containerd[2024]: time="2026-03-12T23:46:19.896854221Z" level=info msg="connecting to shim 6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be" address="unix:///run/containerd/s/2f4121e5b151b1cbed2d464fe26b0a431315fc95a45c4fe4ff88d84474afcae7" protocol=ttrpc version=3 Mar 12 23:46:19.914368 containerd[2024]: time="2026-03-12T23:46:19.914282193Z" level=info msg="Container b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:46:19.925696 containerd[2024]: time="2026-03-12T23:46:19.925592074Z" level=info msg="CreateContainer within sandbox \"723e333f0b4ba938205c61c073f7ad6e57145aa0a11923ef82b7de8f20325443\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8698957d758212346b66a7a3e4bfbbdf6136e3c8c059164d3e8c978976b84bd7\"" Mar 12 23:46:19.927986 containerd[2024]: time="2026-03-12T23:46:19.927880258Z" level=info msg="StartContainer for \"8698957d758212346b66a7a3e4bfbbdf6136e3c8c059164d3e8c978976b84bd7\"" Mar 12 23:46:19.931205 systemd[1]: Started cri-containerd-6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be.scope - libcontainer container 6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be. Mar 12 23:46:19.932182 containerd[2024]: time="2026-03-12T23:46:19.932054638Z" level=info msg="connecting to shim 8698957d758212346b66a7a3e4bfbbdf6136e3c8c059164d3e8c978976b84bd7" address="unix:///run/containerd/s/73492fcdef5e7114cc047d869fc8ba8932e755734a2f7d3be4be6c7228a3d772" protocol=ttrpc version=3 Mar 12 23:46:19.947994 containerd[2024]: time="2026-03-12T23:46:19.947886406Z" level=info msg="CreateContainer within sandbox \"42e67c40b16f93b172f53920b7c3c57a7667f142e3e5edee14e242a70b957dda\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170\"" Mar 12 23:46:19.949392 containerd[2024]: time="2026-03-12T23:46:19.949064158Z" level=info msg="StartContainer for \"b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170\"" Mar 12 23:46:19.961320 containerd[2024]: time="2026-03-12T23:46:19.961153990Z" level=info msg="connecting to shim b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170" address="unix:///run/containerd/s/f73f43bb1631c6dfba4548008870f26801120c6a294be536504b4ef986df9d9a" protocol=ttrpc version=3 Mar 12 23:46:20.005307 systemd[1]: Started cri-containerd-8698957d758212346b66a7a3e4bfbbdf6136e3c8c059164d3e8c978976b84bd7.scope - libcontainer container 8698957d758212346b66a7a3e4bfbbdf6136e3c8c059164d3e8c978976b84bd7. 
Mar 12 23:46:20.009106 kubelet[2971]: E0312 23:46:20.008305 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-65&limit=500&resourceVersion=0\": dial tcp 172.31.21.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 23:46:20.041232 systemd[1]: Started cri-containerd-b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170.scope - libcontainer container b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170. Mar 12 23:46:20.089868 containerd[2024]: time="2026-03-12T23:46:20.089785734Z" level=info msg="StartContainer for \"6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be\" returns successfully" Mar 12 23:46:20.152728 kubelet[2971]: E0312 23:46:20.152660 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 23:46:20.188669 containerd[2024]: time="2026-03-12T23:46:20.188551435Z" level=info msg="StartContainer for \"8698957d758212346b66a7a3e4bfbbdf6136e3c8c059164d3e8c978976b84bd7\" returns successfully" Mar 12 23:46:20.212864 containerd[2024]: time="2026-03-12T23:46:20.212621359Z" level=info msg="StartContainer for \"b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170\" returns successfully" Mar 12 23:46:20.646816 kubelet[2971]: I0312 23:46:20.646770 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-65" Mar 12 23:46:21.097559 kubelet[2971]: E0312 23:46:21.097409 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" 
node="ip-172-31-21-65" Mar 12 23:46:21.103737 kubelet[2971]: E0312 23:46:21.103675 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:21.110759 kubelet[2971]: E0312 23:46:21.110713 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:22.114577 kubelet[2971]: E0312 23:46:22.114515 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:22.115873 kubelet[2971]: E0312 23:46:22.115359 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:22.116313 kubelet[2971]: E0312 23:46:22.116270 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:23.116664 kubelet[2971]: E0312 23:46:23.116596 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:23.117376 kubelet[2971]: E0312 23:46:23.117331 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:23.317373 kubelet[2971]: E0312 23:46:23.317310 2971 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-65\" not found" node="ip-172-31-21-65" Mar 12 23:46:23.415962 kubelet[2971]: I0312 23:46:23.415176 2971 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-65" 
Mar 12 23:46:23.416552 kubelet[2971]: I0312 23:46:23.416514 2971 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-65" Mar 12 23:46:23.489013 kubelet[2971]: E0312 23:46:23.488945 2971 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-65\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-65" Mar 12 23:46:23.489013 kubelet[2971]: I0312 23:46:23.489020 2971 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-65" Mar 12 23:46:23.499165 kubelet[2971]: E0312 23:46:23.499106 2971 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-65\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-65" Mar 12 23:46:23.499165 kubelet[2971]: I0312 23:46:23.499155 2971 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-65" Mar 12 23:46:23.507927 kubelet[2971]: E0312 23:46:23.507577 2971 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-65\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-65" Mar 12 23:46:23.986611 kubelet[2971]: I0312 23:46:23.986550 2971 apiserver.go:52] "Watching apiserver" Mar 12 23:46:24.013006 kubelet[2971]: I0312 23:46:24.012941 2971 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 23:46:24.783201 kubelet[2971]: I0312 23:46:24.783074 2971 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-65" Mar 12 23:46:25.723542 systemd[1]: Reload requested from client PID 3253 ('systemctl') (unit session-7.scope)... Mar 12 23:46:25.724049 systemd[1]: Reloading... 
Mar 12 23:46:25.919950 zram_generator::config[3297]: No configuration found. Mar 12 23:46:26.405206 systemd[1]: Reloading finished in 680 ms. Mar 12 23:46:26.444520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:26.464997 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 23:46:26.465679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:26.465763 systemd[1]: kubelet.service: Consumed 2.448s CPU time, 128.9M memory peak. Mar 12 23:46:26.469202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:46:26.837189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:46:26.856553 (kubelet)[3357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 23:46:26.970774 kubelet[3357]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 23:46:26.971949 kubelet[3357]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 23:46:26.971949 kubelet[3357]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 23:46:26.971949 kubelet[3357]: I0312 23:46:26.971520 3357 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 23:46:26.986149 kubelet[3357]: I0312 23:46:26.984985 3357 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 23:46:26.986149 kubelet[3357]: I0312 23:46:26.985035 3357 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 23:46:26.986149 kubelet[3357]: I0312 23:46:26.985462 3357 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 23:46:26.988049 kubelet[3357]: I0312 23:46:26.988003 3357 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 23:46:26.993247 kubelet[3357]: I0312 23:46:26.992451 3357 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 23:46:27.001737 kubelet[3357]: I0312 23:46:27.001685 3357 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 23:46:27.009179 kubelet[3357]: I0312 23:46:27.009124 3357 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 23:46:27.014304 kubelet[3357]: I0312 23:46:27.011366 3357 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 23:46:27.014304 kubelet[3357]: I0312 23:46:27.011425 3357 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 23:46:27.014304 kubelet[3357]: I0312 23:46:27.011862 3357 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 
23:46:27.014304 kubelet[3357]: I0312 23:46:27.011883 3357 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 23:46:27.014304 kubelet[3357]: I0312 23:46:27.012003 3357 state_mem.go:36] "Initialized new in-memory state store" Mar 12 23:46:27.014721 kubelet[3357]: I0312 23:46:27.012310 3357 kubelet.go:480] "Attempting to sync node with API server" Mar 12 23:46:27.014721 kubelet[3357]: I0312 23:46:27.012335 3357 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 23:46:27.014721 kubelet[3357]: I0312 23:46:27.012389 3357 kubelet.go:386] "Adding apiserver pod source" Mar 12 23:46:27.014721 kubelet[3357]: I0312 23:46:27.012418 3357 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 23:46:27.028314 kubelet[3357]: I0312 23:46:27.028276 3357 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 12 23:46:27.032430 kubelet[3357]: I0312 23:46:27.032387 3357 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 23:46:27.039550 kubelet[3357]: I0312 23:46:27.039518 3357 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 23:46:27.040680 kubelet[3357]: I0312 23:46:27.040650 3357 server.go:1289] "Started kubelet" Mar 12 23:46:27.046301 kubelet[3357]: I0312 23:46:27.046260 3357 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 23:46:27.060342 kubelet[3357]: I0312 23:46:27.060269 3357 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 23:46:27.064707 kubelet[3357]: I0312 23:46:27.063778 3357 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 23:46:27.066316 kubelet[3357]: I0312 23:46:27.066286 3357 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 
23:46:27.074497 kubelet[3357]: I0312 23:46:27.067272 3357 server.go:317] "Adding debug handlers to kubelet server" Mar 12 23:46:27.081571 kubelet[3357]: I0312 23:46:27.080483 3357 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 23:46:27.088962 kubelet[3357]: I0312 23:46:27.087069 3357 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 23:46:27.090647 kubelet[3357]: E0312 23:46:27.090586 3357 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-65\" not found" Mar 12 23:46:27.095748 kubelet[3357]: I0312 23:46:27.095685 3357 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 23:46:27.100709 kubelet[3357]: I0312 23:46:27.100649 3357 factory.go:223] Registration of the systemd container factory successfully Mar 12 23:46:27.101628 kubelet[3357]: I0312 23:46:27.100838 3357 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 23:46:27.116014 kubelet[3357]: I0312 23:46:27.115634 3357 reconciler.go:26] "Reconciler: start to sync state" Mar 12 23:46:27.134767 kubelet[3357]: I0312 23:46:27.134628 3357 factory.go:223] Registration of the containerd container factory successfully Mar 12 23:46:27.149242 kubelet[3357]: E0312 23:46:27.149187 3357 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 23:46:27.187540 kubelet[3357]: I0312 23:46:27.186551 3357 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 23:46:27.193752 kubelet[3357]: I0312 23:46:27.193712 3357 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 12 23:46:27.195083 kubelet[3357]: I0312 23:46:27.194750 3357 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 23:46:27.195083 kubelet[3357]: I0312 23:46:27.194834 3357 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 23:46:27.195083 kubelet[3357]: I0312 23:46:27.194852 3357 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 23:46:27.195421 kubelet[3357]: E0312 23:46:27.195350 3357 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 23:46:27.295644 kubelet[3357]: E0312 23:46:27.295465 3357 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 23:46:27.348408 kubelet[3357]: I0312 23:46:27.347389 3357 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 23:46:27.350171 kubelet[3357]: I0312 23:46:27.348839 3357 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 23:46:27.350171 kubelet[3357]: I0312 23:46:27.348936 3357 state_mem.go:36] "Initialized new in-memory state store" Mar 12 23:46:27.350171 kubelet[3357]: I0312 23:46:27.349165 3357 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 23:46:27.350171 kubelet[3357]: I0312 23:46:27.349184 3357 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 23:46:27.350171 kubelet[3357]: I0312 23:46:27.349219 3357 policy_none.go:49] "None policy: Start" Mar 12 23:46:27.350171 kubelet[3357]: I0312 23:46:27.349240 3357 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 23:46:27.350171 kubelet[3357]: I0312 23:46:27.349261 3357 state_mem.go:35] "Initializing new in-memory state store" Mar 12 23:46:27.350171 kubelet[3357]: I0312 23:46:27.349421 3357 state_mem.go:75] "Updated machine memory state" Mar 12 23:46:27.366577 kubelet[3357]: E0312 23:46:27.366520 
3357 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 23:46:27.367747 kubelet[3357]: I0312 23:46:27.367696 3357 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 23:46:27.368418 kubelet[3357]: I0312 23:46:27.368053 3357 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 23:46:27.369791 kubelet[3357]: I0312 23:46:27.369110 3357 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 23:46:27.377384 kubelet[3357]: E0312 23:46:27.377233 3357 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 23:46:27.491072 kubelet[3357]: I0312 23:46:27.491029 3357 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-65" Mar 12 23:46:27.498420 kubelet[3357]: I0312 23:46:27.498363 3357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-65" Mar 12 23:46:27.503187 kubelet[3357]: I0312 23:46:27.501765 3357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-65" Mar 12 23:46:27.503187 kubelet[3357]: I0312 23:46:27.502342 3357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-65" Mar 12 23:46:27.525490 kubelet[3357]: E0312 23:46:27.525450 3357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-65\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-65" Mar 12 23:46:27.525759 kubelet[3357]: I0312 23:46:27.525714 3357 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-65" Mar 12 23:46:27.525934 kubelet[3357]: I0312 23:46:27.525840 3357 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-65" Mar 12 23:46:27.631771 kubelet[3357]: 
I0312 23:46:27.631247 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1eaa1639352e88e0c783769587a79d1-ca-certs\") pod \"kube-apiserver-ip-172-31-21-65\" (UID: \"f1eaa1639352e88e0c783769587a79d1\") " pod="kube-system/kube-apiserver-ip-172-31-21-65"
Mar 12 23:46:27.631771 kubelet[3357]: I0312 23:46:27.631318 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65"
Mar 12 23:46:27.631771 kubelet[3357]: I0312 23:46:27.631358 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65"
Mar 12 23:46:27.631771 kubelet[3357]: I0312 23:46:27.631394 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65"
Mar 12 23:46:27.631771 kubelet[3357]: I0312 23:46:27.631432 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65"
Mar 12 23:46:27.632147 kubelet[3357]: I0312 23:46:27.631470 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1eaa1639352e88e0c783769587a79d1-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-65\" (UID: \"f1eaa1639352e88e0c783769587a79d1\") " pod="kube-system/kube-apiserver-ip-172-31-21-65"
Mar 12 23:46:27.632147 kubelet[3357]: I0312 23:46:27.631506 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1eaa1639352e88e0c783769587a79d1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-65\" (UID: \"f1eaa1639352e88e0c783769587a79d1\") " pod="kube-system/kube-apiserver-ip-172-31-21-65"
Mar 12 23:46:27.632147 kubelet[3357]: I0312 23:46:27.631564 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38096cff036c2678d7e25a0dc9a7e7e2-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-65\" (UID: \"38096cff036c2678d7e25a0dc9a7e7e2\") " pod="kube-system/kube-controller-manager-ip-172-31-21-65"
Mar 12 23:46:27.632147 kubelet[3357]: I0312 23:46:27.631600 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5adefd0a9253d0d3c214b55ee8d1ceb5-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-65\" (UID: \"5adefd0a9253d0d3c214b55ee8d1ceb5\") " pod="kube-system/kube-scheduler-ip-172-31-21-65"
Mar 12 23:46:28.022010 kubelet[3357]: I0312 23:46:28.021761 3357 apiserver.go:52] "Watching apiserver"
Mar 12 23:46:28.096689 kubelet[3357]: I0312 23:46:28.096627 3357 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 12 23:46:28.282083 kubelet[3357]: I0312 23:46:28.281923 3357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-65"
Mar 12 23:46:28.283453 kubelet[3357]: I0312 23:46:28.283402 3357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-65"
Mar 12 23:46:28.284572 kubelet[3357]: I0312 23:46:28.283881 3357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-65"
Mar 12 23:46:28.304370 kubelet[3357]: E0312 23:46:28.303727 3357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-65\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-65"
Mar 12 23:46:28.325939 kubelet[3357]: E0312 23:46:28.325209 3357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-65\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-65"
Mar 12 23:46:28.332197 kubelet[3357]: E0312 23:46:28.332132 3357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-65\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-65"
Mar 12 23:46:28.401316 kubelet[3357]: I0312 23:46:28.401166 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-65" podStartSLOduration=4.4011289 podStartE2EDuration="4.4011289s" podCreationTimestamp="2026-03-12 23:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:46:28.396697072 +0000 UTC m=+1.528203177" watchObservedRunningTime="2026-03-12 23:46:28.4011289 +0000 UTC m=+1.532635017"
Mar 12 23:46:28.439071 kubelet[3357]: I0312 23:46:28.437485 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-65" podStartSLOduration=1.437351752 podStartE2EDuration="1.437351752s" podCreationTimestamp="2026-03-12 23:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:46:28.437264092 +0000 UTC m=+1.568770221" watchObservedRunningTime="2026-03-12 23:46:28.437351752 +0000 UTC m=+1.568857917"
Mar 12 23:46:30.194161 update_engine[1993]: I20260312 23:46:30.193151 1993 update_attempter.cc:509] Updating boot flags...
Mar 12 23:46:31.315858 kubelet[3357]: I0312 23:46:31.315727 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-65" podStartSLOduration=4.315705078 podStartE2EDuration="4.315705078s" podCreationTimestamp="2026-03-12 23:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:46:28.48164104 +0000 UTC m=+1.613147145" watchObservedRunningTime="2026-03-12 23:46:31.315705078 +0000 UTC m=+4.447211195"
Mar 12 23:46:31.916620 kubelet[3357]: I0312 23:46:31.916571 3357 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 12 23:46:31.917604 containerd[2024]: time="2026-03-12T23:46:31.917421105Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 12 23:46:31.918838 kubelet[3357]: I0312 23:46:31.918792 3357 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 12 23:46:32.864514 systemd[1]: Created slice kubepods-besteffort-podf9b2aa50_287c_4bfe_99d3_3556909a4fc5.slice - libcontainer container kubepods-besteffort-podf9b2aa50_287c_4bfe_99d3_3556909a4fc5.slice.
Mar 12 23:46:32.978030 kubelet[3357]: I0312 23:46:32.977370 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvvz\" (UniqueName: \"kubernetes.io/projected/f9b2aa50-287c-4bfe-99d3-3556909a4fc5-kube-api-access-9wvvz\") pod \"kube-proxy-9q2vb\" (UID: \"f9b2aa50-287c-4bfe-99d3-3556909a4fc5\") " pod="kube-system/kube-proxy-9q2vb"
Mar 12 23:46:32.978030 kubelet[3357]: I0312 23:46:32.977439 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9b2aa50-287c-4bfe-99d3-3556909a4fc5-kube-proxy\") pod \"kube-proxy-9q2vb\" (UID: \"f9b2aa50-287c-4bfe-99d3-3556909a4fc5\") " pod="kube-system/kube-proxy-9q2vb"
Mar 12 23:46:32.978030 kubelet[3357]: I0312 23:46:32.977481 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9b2aa50-287c-4bfe-99d3-3556909a4fc5-xtables-lock\") pod \"kube-proxy-9q2vb\" (UID: \"f9b2aa50-287c-4bfe-99d3-3556909a4fc5\") " pod="kube-system/kube-proxy-9q2vb"
Mar 12 23:46:32.978030 kubelet[3357]: I0312 23:46:32.977516 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9b2aa50-287c-4bfe-99d3-3556909a4fc5-lib-modules\") pod \"kube-proxy-9q2vb\" (UID: \"f9b2aa50-287c-4bfe-99d3-3556909a4fc5\") " pod="kube-system/kube-proxy-9q2vb"
Mar 12 23:46:33.180239 containerd[2024]: time="2026-03-12T23:46:33.180051295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9q2vb,Uid:f9b2aa50-287c-4bfe-99d3-3556909a4fc5,Namespace:kube-system,Attempt:0,}"
Mar 12 23:46:33.189705 systemd[1]: Created slice kubepods-besteffort-pod88c9b78b_868b_4006_aaac_9bf4f01e050e.slice - libcontainer container kubepods-besteffort-pod88c9b78b_868b_4006_aaac_9bf4f01e050e.slice.
Mar 12 23:46:33.233226 containerd[2024]: time="2026-03-12T23:46:33.233116340Z" level=info msg="connecting to shim 20c8c69dfa20b69f45a204bbf0ede41fda86c88c820c44ac462eaf19fb2d53e9" address="unix:///run/containerd/s/c81e9ddf081fc6a1123ceb5fa9bcbdc391f5453a957574b754e15261abe2f0c0" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:46:33.280096 kubelet[3357]: I0312 23:46:33.280024 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnnk\" (UniqueName: \"kubernetes.io/projected/88c9b78b-868b-4006-aaac-9bf4f01e050e-kube-api-access-lxnnk\") pod \"tigera-operator-6bf85f8dd-qd5kz\" (UID: \"88c9b78b-868b-4006-aaac-9bf4f01e050e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-qd5kz"
Mar 12 23:46:33.280238 kubelet[3357]: I0312 23:46:33.280096 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/88c9b78b-868b-4006-aaac-9bf4f01e050e-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-qd5kz\" (UID: \"88c9b78b-868b-4006-aaac-9bf4f01e050e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-qd5kz"
Mar 12 23:46:33.288200 systemd[1]: Started cri-containerd-20c8c69dfa20b69f45a204bbf0ede41fda86c88c820c44ac462eaf19fb2d53e9.scope - libcontainer container 20c8c69dfa20b69f45a204bbf0ede41fda86c88c820c44ac462eaf19fb2d53e9.
Mar 12 23:46:33.343593 containerd[2024]: time="2026-03-12T23:46:33.343532408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9q2vb,Uid:f9b2aa50-287c-4bfe-99d3-3556909a4fc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"20c8c69dfa20b69f45a204bbf0ede41fda86c88c820c44ac462eaf19fb2d53e9\""
Mar 12 23:46:33.355361 containerd[2024]: time="2026-03-12T23:46:33.355293728Z" level=info msg="CreateContainer within sandbox \"20c8c69dfa20b69f45a204bbf0ede41fda86c88c820c44ac462eaf19fb2d53e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 12 23:46:33.381285 containerd[2024]: time="2026-03-12T23:46:33.379018832Z" level=info msg="Container 0362793bc89287e636fd2b45f22192e588da6e3a50fb3fc10e8995d8f91e174f: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:46:33.405404 containerd[2024]: time="2026-03-12T23:46:33.405241760Z" level=info msg="CreateContainer within sandbox \"20c8c69dfa20b69f45a204bbf0ede41fda86c88c820c44ac462eaf19fb2d53e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0362793bc89287e636fd2b45f22192e588da6e3a50fb3fc10e8995d8f91e174f\""
Mar 12 23:46:33.407851 containerd[2024]: time="2026-03-12T23:46:33.407771096Z" level=info msg="StartContainer for \"0362793bc89287e636fd2b45f22192e588da6e3a50fb3fc10e8995d8f91e174f\""
Mar 12 23:46:33.412417 containerd[2024]: time="2026-03-12T23:46:33.412325409Z" level=info msg="connecting to shim 0362793bc89287e636fd2b45f22192e588da6e3a50fb3fc10e8995d8f91e174f" address="unix:///run/containerd/s/c81e9ddf081fc6a1123ceb5fa9bcbdc391f5453a957574b754e15261abe2f0c0" protocol=ttrpc version=3
Mar 12 23:46:33.446300 systemd[1]: Started cri-containerd-0362793bc89287e636fd2b45f22192e588da6e3a50fb3fc10e8995d8f91e174f.scope - libcontainer container 0362793bc89287e636fd2b45f22192e588da6e3a50fb3fc10e8995d8f91e174f.
Mar 12 23:46:33.504233 containerd[2024]: time="2026-03-12T23:46:33.504154701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-qd5kz,Uid:88c9b78b-868b-4006-aaac-9bf4f01e050e,Namespace:tigera-operator,Attempt:0,}"
Mar 12 23:46:33.560864 containerd[2024]: time="2026-03-12T23:46:33.560705553Z" level=info msg="connecting to shim bc9cd43889d2b554208ca9a6d31ce911f23be5b07acdb89a5c722d70a73ca657" address="unix:///run/containerd/s/ec6c04992276d742cf2242099e1adfc4d0c71c80e8520c02199a63c9639a25a8" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:46:33.601049 containerd[2024]: time="2026-03-12T23:46:33.600965001Z" level=info msg="StartContainer for \"0362793bc89287e636fd2b45f22192e588da6e3a50fb3fc10e8995d8f91e174f\" returns successfully"
Mar 12 23:46:33.650306 systemd[1]: Started cri-containerd-bc9cd43889d2b554208ca9a6d31ce911f23be5b07acdb89a5c722d70a73ca657.scope - libcontainer container bc9cd43889d2b554208ca9a6d31ce911f23be5b07acdb89a5c722d70a73ca657.
Mar 12 23:46:33.747934 containerd[2024]: time="2026-03-12T23:46:33.746685166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-qd5kz,Uid:88c9b78b-868b-4006-aaac-9bf4f01e050e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc9cd43889d2b554208ca9a6d31ce911f23be5b07acdb89a5c722d70a73ca657\""
Mar 12 23:46:33.751348 containerd[2024]: time="2026-03-12T23:46:33.751256614Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 12 23:46:34.372203 kubelet[3357]: I0312 23:46:34.372070 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9q2vb" podStartSLOduration=2.372048021 podStartE2EDuration="2.372048021s" podCreationTimestamp="2026-03-12 23:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:46:34.371739285 +0000 UTC m=+7.503245414" watchObservedRunningTime="2026-03-12 23:46:34.372048021 +0000 UTC m=+7.503554114"
Mar 12 23:46:34.874060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811499867.mount: Deactivated successfully.
Mar 12 23:46:36.402538 containerd[2024]: time="2026-03-12T23:46:36.402472439Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:46:36.404113 containerd[2024]: time="2026-03-12T23:46:36.403872659Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Mar 12 23:46:36.406264 containerd[2024]: time="2026-03-12T23:46:36.405259247Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:46:36.408593 containerd[2024]: time="2026-03-12T23:46:36.408505511Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:46:36.410051 containerd[2024]: time="2026-03-12T23:46:36.409969367Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.658624781s"
Mar 12 23:46:36.410051 containerd[2024]: time="2026-03-12T23:46:36.410028599Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Mar 12 23:46:36.417679 containerd[2024]: time="2026-03-12T23:46:36.417619967Z" level=info msg="CreateContainer within sandbox \"bc9cd43889d2b554208ca9a6d31ce911f23be5b07acdb89a5c722d70a73ca657\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 12 23:46:36.431131 containerd[2024]: time="2026-03-12T23:46:36.431064936Z" level=info msg="Container f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:46:36.437394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465751219.mount: Deactivated successfully.
Mar 12 23:46:36.442393 containerd[2024]: time="2026-03-12T23:46:36.442304832Z" level=info msg="CreateContainer within sandbox \"bc9cd43889d2b554208ca9a6d31ce911f23be5b07acdb89a5c722d70a73ca657\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb\""
Mar 12 23:46:36.444217 containerd[2024]: time="2026-03-12T23:46:36.443962164Z" level=info msg="StartContainer for \"f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb\""
Mar 12 23:46:36.447246 containerd[2024]: time="2026-03-12T23:46:36.447165168Z" level=info msg="connecting to shim f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb" address="unix:///run/containerd/s/ec6c04992276d742cf2242099e1adfc4d0c71c80e8520c02199a63c9639a25a8" protocol=ttrpc version=3
Mar 12 23:46:36.489251 systemd[1]: Started cri-containerd-f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb.scope - libcontainer container f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb.
Mar 12 23:46:36.544617 containerd[2024]: time="2026-03-12T23:46:36.544495212Z" level=info msg="StartContainer for \"f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb\" returns successfully"
Mar 12 23:46:43.544402 sudo[2378]: pam_unix(sudo:session): session closed for user root
Mar 12 23:46:43.623939 sshd[2377]: Connection closed by 4.153.228.146 port 60860
Mar 12 23:46:43.625163 sshd-session[2374]: pam_unix(sshd:session): session closed for user core
Mar 12 23:46:43.638274 systemd-logind[1990]: Session 7 logged out. Waiting for processes to exit.
Mar 12 23:46:43.638514 systemd[1]: sshd@6-172.31.21.65:22-4.153.228.146:60860.service: Deactivated successfully.
Mar 12 23:46:43.645519 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 23:46:43.648053 systemd[1]: session-7.scope: Consumed 12.883s CPU time, 226.1M memory peak.
Mar 12 23:46:43.653407 systemd-logind[1990]: Removed session 7.
Mar 12 23:46:53.360830 kubelet[3357]: I0312 23:46:53.360692 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-qd5kz" podStartSLOduration=17.699015555 podStartE2EDuration="20.360670684s" podCreationTimestamp="2026-03-12 23:46:33 +0000 UTC" firstStartedPulling="2026-03-12 23:46:33.750531454 +0000 UTC m=+6.882037559" lastFinishedPulling="2026-03-12 23:46:36.412186595 +0000 UTC m=+9.543692688" observedRunningTime="2026-03-12 23:46:37.360489324 +0000 UTC m=+10.491995453" watchObservedRunningTime="2026-03-12 23:46:53.360670684 +0000 UTC m=+26.492176789"
Mar 12 23:46:53.377581 kubelet[3357]: E0312 23:46:53.377528 3357 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-21-65\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-21-65' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap"
Mar 12 23:46:53.378283 kubelet[3357]: I0312 23:46:53.377831 3357 status_manager.go:895] "Failed to get status for pod" podUID="6809ab16-fe91-4ad3-959d-45875cd72ccc" pod="calico-system/calico-typha-768d55dd9b-pl5rt" err="pods \"calico-typha-768d55dd9b-pl5rt\" is forbidden: User \"system:node:ip-172-31-21-65\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-21-65' and this object"
Mar 12 23:46:53.378283 kubelet[3357]: E0312 23:46:53.378067 3357 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-21-65\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-21-65' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret"
Mar 12 23:46:53.378283 kubelet[3357]: E0312 23:46:53.378161 3357 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-21-65\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-21-65' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Mar 12 23:46:53.382752 systemd[1]: Created slice kubepods-besteffort-pod6809ab16_fe91_4ad3_959d_45875cd72ccc.slice - libcontainer container kubepods-besteffort-pod6809ab16_fe91_4ad3_959d_45875cd72ccc.slice.
Mar 12 23:46:53.421914 kubelet[3357]: I0312 23:46:53.421107 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6809ab16-fe91-4ad3-959d-45875cd72ccc-tigera-ca-bundle\") pod \"calico-typha-768d55dd9b-pl5rt\" (UID: \"6809ab16-fe91-4ad3-959d-45875cd72ccc\") " pod="calico-system/calico-typha-768d55dd9b-pl5rt"
Mar 12 23:46:53.422374 kubelet[3357]: I0312 23:46:53.422320 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6809ab16-fe91-4ad3-959d-45875cd72ccc-typha-certs\") pod \"calico-typha-768d55dd9b-pl5rt\" (UID: \"6809ab16-fe91-4ad3-959d-45875cd72ccc\") " pod="calico-system/calico-typha-768d55dd9b-pl5rt"
Mar 12 23:46:53.422517 kubelet[3357]: I0312 23:46:53.422423 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75km\" (UniqueName: \"kubernetes.io/projected/6809ab16-fe91-4ad3-959d-45875cd72ccc-kube-api-access-t75km\") pod \"calico-typha-768d55dd9b-pl5rt\" (UID: \"6809ab16-fe91-4ad3-959d-45875cd72ccc\") " pod="calico-system/calico-typha-768d55dd9b-pl5rt"
Mar 12 23:46:53.550442 systemd[1]: Created slice kubepods-besteffort-pod78788e99_ced5_4831_83b6_a5a81f13d0fd.slice - libcontainer container kubepods-besteffort-pod78788e99_ced5_4831_83b6_a5a81f13d0fd.slice.
Mar 12 23:46:53.624514 kubelet[3357]: I0312 23:46:53.623614 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-cni-bin-dir\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.624514 kubelet[3357]: I0312 23:46:53.623694 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-policysync\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.624514 kubelet[3357]: I0312 23:46:53.623738 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-sys-fs\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.624514 kubelet[3357]: I0312 23:46:53.623776 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78788e99-ced5-4831-83b6-a5a81f13d0fd-tigera-ca-bundle\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.624514 kubelet[3357]: I0312 23:46:53.623817 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78788e99-ced5-4831-83b6-a5a81f13d0fd-node-certs\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.624883 kubelet[3357]: I0312 23:46:53.623853 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-var-run-calico\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.624883 kubelet[3357]: I0312 23:46:53.623907 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvspg\" (UniqueName: \"kubernetes.io/projected/78788e99-ced5-4831-83b6-a5a81f13d0fd-kube-api-access-vvspg\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.626045 kubelet[3357]: I0312 23:46:53.625950 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-lib-modules\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.626045 kubelet[3357]: I0312 23:46:53.626036 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-xtables-lock\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.626291 kubelet[3357]: I0312 23:46:53.626075 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-cni-net-dir\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.626291 kubelet[3357]: I0312 23:46:53.626131 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-var-lib-calico\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.626291 kubelet[3357]: I0312 23:46:53.626172 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-nodeproc\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.626291 kubelet[3357]: I0312 23:46:53.626211 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-bpffs\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.626291 kubelet[3357]: I0312 23:46:53.626248 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-cni-log-dir\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.626555 kubelet[3357]: I0312 23:46:53.626289 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78788e99-ced5-4831-83b6-a5a81f13d0fd-flexvol-driver-host\") pod \"calico-node-vdhsg\" (UID: \"78788e99-ced5-4831-83b6-a5a81f13d0fd\") " pod="calico-system/calico-node-vdhsg"
Mar 12 23:46:53.655715 kubelet[3357]: E0312 23:46:53.655008 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66"
Mar 12 23:46:53.726775 kubelet[3357]: I0312 23:46:53.726684 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66-socket-dir\") pod \"csi-node-driver-xgscd\" (UID: \"d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66\") " pod="calico-system/csi-node-driver-xgscd"
Mar 12 23:46:53.726775 kubelet[3357]: I0312 23:46:53.726776 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66-kubelet-dir\") pod \"csi-node-driver-xgscd\" (UID: \"d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66\") " pod="calico-system/csi-node-driver-xgscd"
Mar 12 23:46:53.727040 kubelet[3357]: I0312 23:46:53.726974 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66-registration-dir\") pod \"csi-node-driver-xgscd\" (UID: \"d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66\") " pod="calico-system/csi-node-driver-xgscd"
Mar 12 23:46:53.727040 kubelet[3357]: I0312 23:46:53.727015 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59gk\" (UniqueName: \"kubernetes.io/projected/d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66-kube-api-access-z59gk\") pod \"csi-node-driver-xgscd\" (UID: \"d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66\") " pod="calico-system/csi-node-driver-xgscd"
Mar 12 23:46:53.727150 kubelet[3357]: I0312 23:46:53.727074 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66-varrun\") pod \"csi-node-driver-xgscd\" (UID: \"d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66\") " pod="calico-system/csi-node-driver-xgscd"
Mar 12 23:46:53.741115 kubelet[3357]: E0312 23:46:53.741055 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.741115 kubelet[3357]: W0312 23:46:53.741100 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.741325 kubelet[3357]: E0312 23:46:53.741153 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.829160 kubelet[3357]: E0312 23:46:53.829002 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.829943 kubelet[3357]: W0312 23:46:53.829720 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.829943 kubelet[3357]: E0312 23:46:53.829772 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.831672 kubelet[3357]: E0312 23:46:53.831123 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.831672 kubelet[3357]: W0312 23:46:53.831157 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.831672 kubelet[3357]: E0312 23:46:53.831190 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.832537 kubelet[3357]: E0312 23:46:53.832407 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.832976 kubelet[3357]: W0312 23:46:53.832666 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.832976 kubelet[3357]: E0312 23:46:53.832704 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.833841 kubelet[3357]: E0312 23:46:53.833788 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.834369 kubelet[3357]: W0312 23:46:53.834025 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.834369 kubelet[3357]: E0312 23:46:53.834059 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.835334 kubelet[3357]: E0312 23:46:53.835293 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.835877 kubelet[3357]: W0312 23:46:53.835552 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.835877 kubelet[3357]: E0312 23:46:53.835584 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.837563 kubelet[3357]: E0312 23:46:53.837294 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.837563 kubelet[3357]: W0312 23:46:53.837326 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.837563 kubelet[3357]: E0312 23:46:53.837358 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.838799 kubelet[3357]: E0312 23:46:53.838548 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.838799 kubelet[3357]: W0312 23:46:53.838579 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.838799 kubelet[3357]: E0312 23:46:53.838608 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.839785 kubelet[3357]: E0312 23:46:53.839523 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.839785 kubelet[3357]: W0312 23:46:53.839649 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.839785 kubelet[3357]: E0312 23:46:53.839679 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.840929 kubelet[3357]: E0312 23:46:53.840671 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.840929 kubelet[3357]: W0312 23:46:53.840701 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.840929 kubelet[3357]: E0312 23:46:53.840728 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.841634 kubelet[3357]: E0312 23:46:53.841607 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.842120 kubelet[3357]: W0312 23:46:53.841850 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.842120 kubelet[3357]: E0312 23:46:53.841915 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 23:46:53.844001 kubelet[3357]: E0312 23:46:53.842777 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 23:46:53.844001 kubelet[3357]: W0312 23:46:53.842837 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 23:46:53.844001 kubelet[3357]: E0312 23:46:53.842868 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:53.844670 kubelet[3357]: E0312 23:46:53.844641 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.844943 kubelet[3357]: W0312 23:46:53.844788 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.845507 kubelet[3357]: E0312 23:46:53.844827 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:53.847286 kubelet[3357]: E0312 23:46:53.846188 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.847286 kubelet[3357]: W0312 23:46:53.846225 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.847286 kubelet[3357]: E0312 23:46:53.846257 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:53.848143 kubelet[3357]: E0312 23:46:53.847976 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.848143 kubelet[3357]: W0312 23:46:53.848008 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.848143 kubelet[3357]: E0312 23:46:53.848040 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:53.849292 kubelet[3357]: E0312 23:46:53.849099 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.849292 kubelet[3357]: W0312 23:46:53.849133 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.849292 kubelet[3357]: E0312 23:46:53.849167 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:53.851542 kubelet[3357]: E0312 23:46:53.851253 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.851542 kubelet[3357]: W0312 23:46:53.851289 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.851542 kubelet[3357]: E0312 23:46:53.851320 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:53.852936 kubelet[3357]: E0312 23:46:53.852125 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.853958 kubelet[3357]: W0312 23:46:53.853751 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.853958 kubelet[3357]: E0312 23:46:53.853798 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:53.855151 kubelet[3357]: E0312 23:46:53.855077 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.855760 kubelet[3357]: W0312 23:46:53.855309 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.855760 kubelet[3357]: E0312 23:46:53.855348 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:53.857751 kubelet[3357]: E0312 23:46:53.856720 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.857751 kubelet[3357]: W0312 23:46:53.856755 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.857751 kubelet[3357]: E0312 23:46:53.856787 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:53.858771 kubelet[3357]: E0312 23:46:53.858597 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.858771 kubelet[3357]: W0312 23:46:53.858630 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.858771 kubelet[3357]: E0312 23:46:53.858661 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:53.860132 kubelet[3357]: E0312 23:46:53.859881 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.860132 kubelet[3357]: W0312 23:46:53.859947 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.860132 kubelet[3357]: E0312 23:46:53.859981 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:53.861427 kubelet[3357]: E0312 23:46:53.861392 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.862181 kubelet[3357]: W0312 23:46:53.861932 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.862181 kubelet[3357]: E0312 23:46:53.861979 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:53.864082 kubelet[3357]: E0312 23:46:53.864043 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.864524 kubelet[3357]: W0312 23:46:53.864233 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.864524 kubelet[3357]: E0312 23:46:53.864273 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:53.865237 kubelet[3357]: E0312 23:46:53.865181 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.865237 kubelet[3357]: W0312 23:46:53.865221 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.865985 kubelet[3357]: E0312 23:46:53.865257 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:53.867201 kubelet[3357]: E0312 23:46:53.867152 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:53.867201 kubelet[3357]: W0312 23:46:53.867191 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:53.867401 kubelet[3357]: E0312 23:46:53.867227 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.523270 kubelet[3357]: E0312 23:46:54.523229 3357 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 23:46:54.523836 kubelet[3357]: E0312 23:46:54.523350 3357 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6809ab16-fe91-4ad3-959d-45875cd72ccc-tigera-ca-bundle podName:6809ab16-fe91-4ad3-959d-45875cd72ccc nodeName:}" failed. No retries permitted until 2026-03-12 23:46:55.023317617 +0000 UTC m=+28.154823722 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/6809ab16-fe91-4ad3-959d-45875cd72ccc-tigera-ca-bundle") pod "calico-typha-768d55dd9b-pl5rt" (UID: "6809ab16-fe91-4ad3-959d-45875cd72ccc") : failed to sync configmap cache: timed out waiting for the condition Mar 12 23:46:54.525370 kubelet[3357]: E0312 23:46:54.525300 3357 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Mar 12 23:46:54.525518 kubelet[3357]: E0312 23:46:54.525448 3357 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6809ab16-fe91-4ad3-959d-45875cd72ccc-typha-certs podName:6809ab16-fe91-4ad3-959d-45875cd72ccc nodeName:}" failed. No retries permitted until 2026-03-12 23:46:55.025420041 +0000 UTC m=+28.156926158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/6809ab16-fe91-4ad3-959d-45875cd72ccc-typha-certs") pod "calico-typha-768d55dd9b-pl5rt" (UID: "6809ab16-fe91-4ad3-959d-45875cd72ccc") : failed to sync secret cache: timed out waiting for the condition Mar 12 23:46:54.549918 kubelet[3357]: E0312 23:46:54.548417 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.549918 kubelet[3357]: W0312 23:46:54.548460 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.549918 kubelet[3357]: E0312 23:46:54.548493 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:54.551489 kubelet[3357]: E0312 23:46:54.551166 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.551489 kubelet[3357]: W0312 23:46:54.551206 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.551489 kubelet[3357]: E0312 23:46:54.551238 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.558285 kubelet[3357]: E0312 23:46:54.558239 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.558285 kubelet[3357]: W0312 23:46:54.558276 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.558485 kubelet[3357]: E0312 23:46:54.558310 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:54.560328 kubelet[3357]: E0312 23:46:54.560088 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.560328 kubelet[3357]: W0312 23:46:54.560122 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.560328 kubelet[3357]: E0312 23:46:54.560149 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.561737 kubelet[3357]: E0312 23:46:54.561631 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.561737 kubelet[3357]: W0312 23:46:54.561672 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.561737 kubelet[3357]: E0312 23:46:54.561704 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:54.651931 kubelet[3357]: E0312 23:46:54.651844 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.652217 kubelet[3357]: W0312 23:46:54.651875 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.652217 kubelet[3357]: E0312 23:46:54.652112 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.652626 kubelet[3357]: E0312 23:46:54.652606 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.652729 kubelet[3357]: W0312 23:46:54.652708 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.652838 kubelet[3357]: E0312 23:46:54.652817 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.728928 kubelet[3357]: E0312 23:46:54.728601 3357 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 23:46:54.728928 kubelet[3357]: E0312 23:46:54.728714 3357 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78788e99-ced5-4831-83b6-a5a81f13d0fd-tigera-ca-bundle podName:78788e99-ced5-4831-83b6-a5a81f13d0fd nodeName:}" failed. No retries permitted until 2026-03-12 23:46:55.22868515 +0000 UTC m=+28.360191279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/78788e99-ced5-4831-83b6-a5a81f13d0fd-tigera-ca-bundle") pod "calico-node-vdhsg" (UID: "78788e99-ced5-4831-83b6-a5a81f13d0fd") : failed to sync configmap cache: timed out waiting for the condition Mar 12 23:46:54.754458 kubelet[3357]: E0312 23:46:54.754292 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.754458 kubelet[3357]: W0312 23:46:54.754325 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.754458 kubelet[3357]: E0312 23:46:54.754353 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.755199 kubelet[3357]: E0312 23:46:54.755040 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.755199 kubelet[3357]: W0312 23:46:54.755063 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.755199 kubelet[3357]: E0312 23:46:54.755085 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:54.755784 kubelet[3357]: E0312 23:46:54.755658 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.755784 kubelet[3357]: W0312 23:46:54.755684 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.755784 kubelet[3357]: E0312 23:46:54.755712 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.857473 kubelet[3357]: E0312 23:46:54.857313 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.857473 kubelet[3357]: W0312 23:46:54.857343 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.857473 kubelet[3357]: E0312 23:46:54.857373 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:54.858244 kubelet[3357]: E0312 23:46:54.858040 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.858244 kubelet[3357]: W0312 23:46:54.858064 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.858244 kubelet[3357]: E0312 23:46:54.858102 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.858841 kubelet[3357]: E0312 23:46:54.858819 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.859065 kubelet[3357]: W0312 23:46:54.858964 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.859065 kubelet[3357]: E0312 23:46:54.858996 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:54.960579 kubelet[3357]: E0312 23:46:54.960522 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.960579 kubelet[3357]: W0312 23:46:54.960560 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.960789 kubelet[3357]: E0312 23:46:54.960613 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:54.961159 kubelet[3357]: E0312 23:46:54.961115 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.961159 kubelet[3357]: W0312 23:46:54.961145 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.961291 kubelet[3357]: E0312 23:46:54.961169 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:54.961697 kubelet[3357]: E0312 23:46:54.961666 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:54.961697 kubelet[3357]: W0312 23:46:54.961694 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:54.961821 kubelet[3357]: E0312 23:46:54.961740 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.062513 kubelet[3357]: E0312 23:46:55.062458 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.062513 kubelet[3357]: W0312 23:46:55.062494 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.062681 kubelet[3357]: E0312 23:46:55.062525 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.062880 kubelet[3357]: E0312 23:46:55.062852 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.063008 kubelet[3357]: W0312 23:46:55.062936 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.063008 kubelet[3357]: E0312 23:46:55.062963 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.063405 kubelet[3357]: E0312 23:46:55.063376 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.063487 kubelet[3357]: W0312 23:46:55.063404 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.063487 kubelet[3357]: E0312 23:46:55.063427 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.063811 kubelet[3357]: E0312 23:46:55.063784 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.063926 kubelet[3357]: W0312 23:46:55.063810 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.063926 kubelet[3357]: E0312 23:46:55.063833 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.064174 kubelet[3357]: E0312 23:46:55.064141 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.064174 kubelet[3357]: W0312 23:46:55.064168 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.064337 kubelet[3357]: E0312 23:46:55.064190 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.064480 kubelet[3357]: E0312 23:46:55.064454 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.064549 kubelet[3357]: W0312 23:46:55.064478 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.064549 kubelet[3357]: E0312 23:46:55.064500 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.064791 kubelet[3357]: E0312 23:46:55.064765 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.064869 kubelet[3357]: W0312 23:46:55.064789 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.064869 kubelet[3357]: E0312 23:46:55.064810 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.065144 kubelet[3357]: E0312 23:46:55.065118 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.065215 kubelet[3357]: W0312 23:46:55.065142 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.065215 kubelet[3357]: E0312 23:46:55.065165 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.065455 kubelet[3357]: E0312 23:46:55.065430 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.065455 kubelet[3357]: W0312 23:46:55.065453 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.065626 kubelet[3357]: E0312 23:46:55.065473 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.065811 kubelet[3357]: E0312 23:46:55.065784 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.065914 kubelet[3357]: W0312 23:46:55.065810 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.065914 kubelet[3357]: E0312 23:46:55.065831 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.067479 kubelet[3357]: E0312 23:46:55.066947 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.067479 kubelet[3357]: W0312 23:46:55.066982 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.067479 kubelet[3357]: E0312 23:46:55.067026 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.068164 kubelet[3357]: E0312 23:46:55.068116 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.068257 kubelet[3357]: W0312 23:46:55.068155 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.068257 kubelet[3357]: E0312 23:46:55.068197 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.079040 kubelet[3357]: E0312 23:46:55.079002 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.080700 kubelet[3357]: W0312 23:46:55.080647 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.083057 kubelet[3357]: E0312 23:46:55.080868 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.165068 kubelet[3357]: E0312 23:46:55.164934 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.165068 kubelet[3357]: W0312 23:46:55.164971 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.165068 kubelet[3357]: E0312 23:46:55.165003 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.194461 containerd[2024]: time="2026-03-12T23:46:55.194375261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-768d55dd9b-pl5rt,Uid:6809ab16-fe91-4ad3-959d-45875cd72ccc,Namespace:calico-system,Attempt:0,}" Mar 12 23:46:55.196883 kubelet[3357]: E0312 23:46:55.196799 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:46:55.244357 containerd[2024]: time="2026-03-12T23:46:55.244274705Z" level=info msg="connecting to shim ad5ba1e5919d98be14ab6097a372e33c27001ce05a18a886139bb62f1d5a908a" address="unix:///run/containerd/s/82aba8ba14825bcc1c3b2a81712aed92c231145a88bffb084171432f85655441" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:46:55.269842 kubelet[3357]: E0312 23:46:55.269058 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.269842 kubelet[3357]: W0312 23:46:55.269091 3357 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.269842 kubelet[3357]: E0312 23:46:55.269123 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.269842 kubelet[3357]: E0312 23:46:55.269613 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.269842 kubelet[3357]: W0312 23:46:55.269633 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.269842 kubelet[3357]: E0312 23:46:55.269658 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.270706 kubelet[3357]: E0312 23:46:55.270390 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.270706 kubelet[3357]: W0312 23:46:55.270417 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.270706 kubelet[3357]: E0312 23:46:55.270442 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.271257 kubelet[3357]: E0312 23:46:55.270989 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.271257 kubelet[3357]: W0312 23:46:55.271012 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.271257 kubelet[3357]: E0312 23:46:55.271036 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.271920 kubelet[3357]: E0312 23:46:55.271523 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.271920 kubelet[3357]: W0312 23:46:55.271547 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.271920 kubelet[3357]: E0312 23:46:55.271573 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:55.276427 kubelet[3357]: E0312 23:46:55.276304 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:55.276427 kubelet[3357]: W0312 23:46:55.276338 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:55.276427 kubelet[3357]: E0312 23:46:55.276369 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:55.306199 systemd[1]: Started cri-containerd-ad5ba1e5919d98be14ab6097a372e33c27001ce05a18a886139bb62f1d5a908a.scope - libcontainer container ad5ba1e5919d98be14ab6097a372e33c27001ce05a18a886139bb62f1d5a908a. Mar 12 23:46:55.361079 containerd[2024]: time="2026-03-12T23:46:55.361006914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vdhsg,Uid:78788e99-ced5-4831-83b6-a5a81f13d0fd,Namespace:calico-system,Attempt:0,}" Mar 12 23:46:55.383499 containerd[2024]: time="2026-03-12T23:46:55.383405190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-768d55dd9b-pl5rt,Uid:6809ab16-fe91-4ad3-959d-45875cd72ccc,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad5ba1e5919d98be14ab6097a372e33c27001ce05a18a886139bb62f1d5a908a\"" Mar 12 23:46:55.387046 containerd[2024]: time="2026-03-12T23:46:55.386797374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 23:46:55.417557 containerd[2024]: time="2026-03-12T23:46:55.417155718Z" level=info msg="connecting to shim 629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd" address="unix:///run/containerd/s/8cab7f216f14192c09c3b25b9e95d2230d3993d36c66e640f1882b61d07a09b5" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:46:55.473257 
systemd[1]: Started cri-containerd-629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd.scope - libcontainer container 629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd. Mar 12 23:46:55.539152 containerd[2024]: time="2026-03-12T23:46:55.539079834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vdhsg,Uid:78788e99-ced5-4831-83b6-a5a81f13d0fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\"" Mar 12 23:46:56.705967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304091858.mount: Deactivated successfully. Mar 12 23:46:57.196371 kubelet[3357]: E0312 23:46:57.196016 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:46:57.521545 containerd[2024]: time="2026-03-12T23:46:57.521139356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:57.523159 containerd[2024]: time="2026-03-12T23:46:57.523114088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 12 23:46:57.525425 containerd[2024]: time="2026-03-12T23:46:57.525378260Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:57.531390 containerd[2024]: time="2026-03-12T23:46:57.531208364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:57.532637 
containerd[2024]: time="2026-03-12T23:46:57.532592588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.14573465s" Mar 12 23:46:57.532880 containerd[2024]: time="2026-03-12T23:46:57.532747148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 12 23:46:57.535678 containerd[2024]: time="2026-03-12T23:46:57.535607024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 23:46:57.575653 containerd[2024]: time="2026-03-12T23:46:57.574511325Z" level=info msg="CreateContainer within sandbox \"ad5ba1e5919d98be14ab6097a372e33c27001ce05a18a886139bb62f1d5a908a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 23:46:57.613923 containerd[2024]: time="2026-03-12T23:46:57.612353649Z" level=info msg="Container f82eafab1b9e334572c49028cd5c41a9d13cc77ad34ea90ed118526e69103f3b: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:46:57.638156 containerd[2024]: time="2026-03-12T23:46:57.637972989Z" level=info msg="CreateContainer within sandbox \"ad5ba1e5919d98be14ab6097a372e33c27001ce05a18a886139bb62f1d5a908a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f82eafab1b9e334572c49028cd5c41a9d13cc77ad34ea90ed118526e69103f3b\"" Mar 12 23:46:57.639341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603809787.mount: Deactivated successfully. 
Mar 12 23:46:57.642079 containerd[2024]: time="2026-03-12T23:46:57.641293977Z" level=info msg="StartContainer for \"f82eafab1b9e334572c49028cd5c41a9d13cc77ad34ea90ed118526e69103f3b\"" Mar 12 23:46:57.644539 containerd[2024]: time="2026-03-12T23:46:57.644485689Z" level=info msg="connecting to shim f82eafab1b9e334572c49028cd5c41a9d13cc77ad34ea90ed118526e69103f3b" address="unix:///run/containerd/s/82aba8ba14825bcc1c3b2a81712aed92c231145a88bffb084171432f85655441" protocol=ttrpc version=3 Mar 12 23:46:57.680344 systemd[1]: Started cri-containerd-f82eafab1b9e334572c49028cd5c41a9d13cc77ad34ea90ed118526e69103f3b.scope - libcontainer container f82eafab1b9e334572c49028cd5c41a9d13cc77ad34ea90ed118526e69103f3b. Mar 12 23:46:57.773259 containerd[2024]: time="2026-03-12T23:46:57.772107958Z" level=info msg="StartContainer for \"f82eafab1b9e334572c49028cd5c41a9d13cc77ad34ea90ed118526e69103f3b\" returns successfully" Mar 12 23:46:58.505436 kubelet[3357]: E0312 23:46:58.505387 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.505436 kubelet[3357]: W0312 23:46:58.505428 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.506198 kubelet[3357]: E0312 23:46:58.505463 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.506198 kubelet[3357]: E0312 23:46:58.505748 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.506198 kubelet[3357]: W0312 23:46:58.505788 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.506198 kubelet[3357]: E0312 23:46:58.505854 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.506198 kubelet[3357]: E0312 23:46:58.506235 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.506541 kubelet[3357]: W0312 23:46:58.506255 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.506541 kubelet[3357]: E0312 23:46:58.506279 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.506725 kubelet[3357]: E0312 23:46:58.506613 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.506725 kubelet[3357]: W0312 23:46:58.506631 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.506725 kubelet[3357]: E0312 23:46:58.506654 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.507031 kubelet[3357]: E0312 23:46:58.506956 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.507031 kubelet[3357]: W0312 23:46:58.506972 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.507031 kubelet[3357]: E0312 23:46:58.506992 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.507446 kubelet[3357]: E0312 23:46:58.507267 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.507446 kubelet[3357]: W0312 23:46:58.507282 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.507446 kubelet[3357]: E0312 23:46:58.507300 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.507635 kubelet[3357]: E0312 23:46:58.507558 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.507635 kubelet[3357]: W0312 23:46:58.507573 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.507635 kubelet[3357]: E0312 23:46:58.507591 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.507880 kubelet[3357]: E0312 23:46:58.507846 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.507880 kubelet[3357]: W0312 23:46:58.507871 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.508067 kubelet[3357]: E0312 23:46:58.507919 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.508222 kubelet[3357]: E0312 23:46:58.508196 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.508281 kubelet[3357]: W0312 23:46:58.508221 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.508281 kubelet[3357]: E0312 23:46:58.508242 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.508522 kubelet[3357]: E0312 23:46:58.508497 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.508582 kubelet[3357]: W0312 23:46:58.508520 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.508582 kubelet[3357]: E0312 23:46:58.508541 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.508839 kubelet[3357]: E0312 23:46:58.508813 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.508929 kubelet[3357]: W0312 23:46:58.508837 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.508929 kubelet[3357]: E0312 23:46:58.508857 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.509185 kubelet[3357]: E0312 23:46:58.509158 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.509376 kubelet[3357]: W0312 23:46:58.509182 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.509376 kubelet[3357]: E0312 23:46:58.509203 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.509505 kubelet[3357]: E0312 23:46:58.509481 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.509505 kubelet[3357]: W0312 23:46:58.509499 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.509611 kubelet[3357]: E0312 23:46:58.509521 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.509823 kubelet[3357]: E0312 23:46:58.509794 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.509823 kubelet[3357]: W0312 23:46:58.509820 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.509984 kubelet[3357]: E0312 23:46:58.509841 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.510181 kubelet[3357]: E0312 23:46:58.510151 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.510181 kubelet[3357]: W0312 23:46:58.510178 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.510313 kubelet[3357]: E0312 23:46:58.510202 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.595933 kubelet[3357]: E0312 23:46:58.595859 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.595933 kubelet[3357]: W0312 23:46:58.595921 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.596204 kubelet[3357]: E0312 23:46:58.595955 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.596321 kubelet[3357]: E0312 23:46:58.596293 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.596392 kubelet[3357]: W0312 23:46:58.596320 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.596392 kubelet[3357]: E0312 23:46:58.596341 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.596700 kubelet[3357]: E0312 23:46:58.596672 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.596784 kubelet[3357]: W0312 23:46:58.596699 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.596784 kubelet[3357]: E0312 23:46:58.596721 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.597189 kubelet[3357]: E0312 23:46:58.597158 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.597437 kubelet[3357]: W0312 23:46:58.597272 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.597437 kubelet[3357]: E0312 23:46:58.597392 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.598001 kubelet[3357]: E0312 23:46:58.597849 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.598001 kubelet[3357]: W0312 23:46:58.597880 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.598001 kubelet[3357]: E0312 23:46:58.597944 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.598567 kubelet[3357]: E0312 23:46:58.598450 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.598567 kubelet[3357]: W0312 23:46:58.598492 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.598567 kubelet[3357]: E0312 23:46:58.598526 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.599291 kubelet[3357]: E0312 23:46:58.599075 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.599291 kubelet[3357]: W0312 23:46:58.599274 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.599460 kubelet[3357]: E0312 23:46:58.599305 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.599886 kubelet[3357]: E0312 23:46:58.599608 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.599886 kubelet[3357]: W0312 23:46:58.599637 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.599886 kubelet[3357]: E0312 23:46:58.599662 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.600350 kubelet[3357]: E0312 23:46:58.599982 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.600350 kubelet[3357]: W0312 23:46:58.600000 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.600350 kubelet[3357]: E0312 23:46:58.600025 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.600503 kubelet[3357]: E0312 23:46:58.600441 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.600503 kubelet[3357]: W0312 23:46:58.600473 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.602207 kubelet[3357]: E0312 23:46:58.600626 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.602207 kubelet[3357]: E0312 23:46:58.601140 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.602207 kubelet[3357]: W0312 23:46:58.601162 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.602207 kubelet[3357]: E0312 23:46:58.601186 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.602207 kubelet[3357]: E0312 23:46:58.601687 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.602207 kubelet[3357]: W0312 23:46:58.601712 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.602207 kubelet[3357]: E0312 23:46:58.601963 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.602829 kubelet[3357]: E0312 23:46:58.602634 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.602829 kubelet[3357]: W0312 23:46:58.602672 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.602829 kubelet[3357]: E0312 23:46:58.602703 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.603466 kubelet[3357]: E0312 23:46:58.603424 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.603466 kubelet[3357]: W0312 23:46:58.603457 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.603660 kubelet[3357]: E0312 23:46:58.603486 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.604085 kubelet[3357]: E0312 23:46:58.603798 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.604085 kubelet[3357]: W0312 23:46:58.603828 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.604085 kubelet[3357]: E0312 23:46:58.603852 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.604496 kubelet[3357]: E0312 23:46:58.604198 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.604496 kubelet[3357]: W0312 23:46:58.604215 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.604496 kubelet[3357]: E0312 23:46:58.604244 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.604797 kubelet[3357]: E0312 23:46:58.604595 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.604797 kubelet[3357]: W0312 23:46:58.604614 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.604797 kubelet[3357]: E0312 23:46:58.604635 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 23:46:58.605406 kubelet[3357]: E0312 23:46:58.605298 3357 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 23:46:58.605406 kubelet[3357]: W0312 23:46:58.605331 3357 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 23:46:58.605406 kubelet[3357]: E0312 23:46:58.605357 3357 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 23:46:58.764031 containerd[2024]: time="2026-03-12T23:46:58.763699918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:58.769183 containerd[2024]: time="2026-03-12T23:46:58.769137058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 12 23:46:58.772141 containerd[2024]: time="2026-03-12T23:46:58.772042102Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:58.776205 containerd[2024]: time="2026-03-12T23:46:58.776139694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:46:58.777723 containerd[2024]: time="2026-03-12T23:46:58.777675970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.241792394s" Mar 12 23:46:58.777937 containerd[2024]: time="2026-03-12T23:46:58.777882646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 12 23:46:58.788343 containerd[2024]: time="2026-03-12T23:46:58.788280299Z" level=info msg="CreateContainer within sandbox \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 23:46:58.811321 containerd[2024]: time="2026-03-12T23:46:58.811244387Z" level=info msg="Container 5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:46:58.843069 containerd[2024]: time="2026-03-12T23:46:58.842992079Z" level=info msg="CreateContainer within sandbox \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f\"" Mar 12 23:46:58.846786 containerd[2024]: time="2026-03-12T23:46:58.846268607Z" level=info msg="StartContainer for \"5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f\"" Mar 12 23:46:58.854200 containerd[2024]: time="2026-03-12T23:46:58.854085131Z" level=info msg="connecting to shim 5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f" address="unix:///run/containerd/s/8cab7f216f14192c09c3b25b9e95d2230d3993d36c66e640f1882b61d07a09b5" protocol=ttrpc version=3 Mar 12 23:46:58.900221 systemd[1]: Started cri-containerd-5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f.scope - libcontainer container 5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f. Mar 12 23:46:59.024530 containerd[2024]: time="2026-03-12T23:46:59.023755256Z" level=info msg="StartContainer for \"5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f\" returns successfully" Mar 12 23:46:59.057447 systemd[1]: cri-containerd-5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f.scope: Deactivated successfully. 
Mar 12 23:46:59.066223 containerd[2024]: time="2026-03-12T23:46:59.066173936Z" level=info msg="received container exit event container_id:\"5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f\" id:\"5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f\" pid:4301 exited_at:{seconds:1773359219 nanos:65030180}" Mar 12 23:46:59.112883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5950da11195d328e49fa55160a5eb79eb95734f357998487b278434be25b162f-rootfs.mount: Deactivated successfully. Mar 12 23:46:59.201445 kubelet[3357]: E0312 23:46:59.201045 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:46:59.438402 kubelet[3357]: I0312 23:46:59.438367 3357 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 23:46:59.470926 kubelet[3357]: I0312 23:46:59.469434 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-768d55dd9b-pl5rt" podStartSLOduration=4.321332912 podStartE2EDuration="6.469413346s" podCreationTimestamp="2026-03-12 23:46:53 +0000 UTC" firstStartedPulling="2026-03-12 23:46:55.38632593 +0000 UTC m=+28.517832047" lastFinishedPulling="2026-03-12 23:46:57.534406388 +0000 UTC m=+30.665912481" observedRunningTime="2026-03-12 23:46:58.451702653 +0000 UTC m=+31.583208770" watchObservedRunningTime="2026-03-12 23:46:59.469413346 +0000 UTC m=+32.600919475" Mar 12 23:47:00.450669 containerd[2024]: time="2026-03-12T23:47:00.450564167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 23:47:01.195809 kubelet[3357]: E0312 23:47:01.195729 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:47:03.196313 kubelet[3357]: E0312 23:47:03.196222 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:47:03.814118 kubelet[3357]: I0312 23:47:03.814073 3357 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 23:47:05.196125 kubelet[3357]: E0312 23:47:05.196047 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:47:06.760346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099871504.mount: Deactivated successfully. 
Mar 12 23:47:06.827004 containerd[2024]: time="2026-03-12T23:47:06.826188378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:06.830166 containerd[2024]: time="2026-03-12T23:47:06.830051106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 12 23:47:06.835106 containerd[2024]: time="2026-03-12T23:47:06.834798367Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:06.842745 containerd[2024]: time="2026-03-12T23:47:06.842659327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:06.846591 containerd[2024]: time="2026-03-12T23:47:06.845987011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.39536276s" Mar 12 23:47:06.846591 containerd[2024]: time="2026-03-12T23:47:06.846069739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 12 23:47:06.856797 containerd[2024]: time="2026-03-12T23:47:06.856717483Z" level=info msg="CreateContainer within sandbox \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 23:47:06.883094 containerd[2024]: time="2026-03-12T23:47:06.881376439Z" level=info msg="Container 
b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:06.892510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090829782.mount: Deactivated successfully. Mar 12 23:47:06.913636 containerd[2024]: time="2026-03-12T23:47:06.913579003Z" level=info msg="CreateContainer within sandbox \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b\"" Mar 12 23:47:06.916964 containerd[2024]: time="2026-03-12T23:47:06.916639519Z" level=info msg="StartContainer for \"b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b\"" Mar 12 23:47:06.919765 containerd[2024]: time="2026-03-12T23:47:06.919713571Z" level=info msg="connecting to shim b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b" address="unix:///run/containerd/s/8cab7f216f14192c09c3b25b9e95d2230d3993d36c66e640f1882b61d07a09b5" protocol=ttrpc version=3 Mar 12 23:47:06.957284 systemd[1]: Started cri-containerd-b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b.scope - libcontainer container b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b. 
Mar 12 23:47:07.063711 containerd[2024]: time="2026-03-12T23:47:07.063541780Z" level=info msg="StartContainer for \"b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b\" returns successfully" Mar 12 23:47:07.197530 kubelet[3357]: E0312 23:47:07.197457 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:47:07.262565 systemd[1]: cri-containerd-b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b.scope: Deactivated successfully. Mar 12 23:47:07.266936 containerd[2024]: time="2026-03-12T23:47:07.266846405Z" level=info msg="received container exit event container_id:\"b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b\" id:\"b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b\" pid:4360 exited_at:{seconds:1773359227 nanos:266474957}" Mar 12 23:47:07.760464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b103ecaee913817887455b5d91ebebffedf4748441f044f15d661eaaf3f4039b-rootfs.mount: Deactivated successfully. 
Mar 12 23:47:08.485937 containerd[2024]: time="2026-03-12T23:47:08.484407475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 23:47:09.196380 kubelet[3357]: E0312 23:47:09.196307 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:47:11.196952 kubelet[3357]: E0312 23:47:11.196675 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:47:11.299772 containerd[2024]: time="2026-03-12T23:47:11.299676597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:11.302152 containerd[2024]: time="2026-03-12T23:47:11.302090481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 12 23:47:11.303287 containerd[2024]: time="2026-03-12T23:47:11.303074109Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:11.309021 containerd[2024]: time="2026-03-12T23:47:11.308935713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:11.311480 containerd[2024]: time="2026-03-12T23:47:11.311285097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" 
with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.824916798s" Mar 12 23:47:11.311480 containerd[2024]: time="2026-03-12T23:47:11.311339085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 12 23:47:11.319684 containerd[2024]: time="2026-03-12T23:47:11.319543821Z" level=info msg="CreateContainer within sandbox \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 23:47:11.334391 containerd[2024]: time="2026-03-12T23:47:11.334254189Z" level=info msg="Container 81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:11.346765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779096973.mount: Deactivated successfully. 
Mar 12 23:47:11.355183 containerd[2024]: time="2026-03-12T23:47:11.355096161Z" level=info msg="CreateContainer within sandbox \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255\"" Mar 12 23:47:11.356265 containerd[2024]: time="2026-03-12T23:47:11.356207157Z" level=info msg="StartContainer for \"81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255\"" Mar 12 23:47:11.360068 containerd[2024]: time="2026-03-12T23:47:11.359951745Z" level=info msg="connecting to shim 81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255" address="unix:///run/containerd/s/8cab7f216f14192c09c3b25b9e95d2230d3993d36c66e640f1882b61d07a09b5" protocol=ttrpc version=3 Mar 12 23:47:11.402223 systemd[1]: Started cri-containerd-81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255.scope - libcontainer container 81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255. Mar 12 23:47:11.532639 containerd[2024]: time="2026-03-12T23:47:11.532420330Z" level=info msg="StartContainer for \"81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255\" returns successfully" Mar 12 23:47:13.196540 kubelet[3357]: E0312 23:47:13.196036 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xgscd" podUID="d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66" Mar 12 23:47:13.228908 systemd[1]: cri-containerd-81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255.scope: Deactivated successfully. Mar 12 23:47:13.230003 systemd[1]: cri-containerd-81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255.scope: Consumed 979ms CPU time, 184M memory peak, 4.1M read from disk, 171.3M written to disk. 
Mar 12 23:47:13.237255 containerd[2024]: time="2026-03-12T23:47:13.236886490Z" level=info msg="received container exit event container_id:\"81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255\" id:\"81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255\" pid:4419 exited_at:{seconds:1773359233 nanos:236508202}" Mar 12 23:47:13.260985 kubelet[3357]: I0312 23:47:13.260366 3357 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 12 23:47:13.369388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81ff8e57f648086cea5b1b0b4c58bd4c2d7c4c18e2e0f80613799e41973d1255-rootfs.mount: Deactivated successfully. Mar 12 23:47:13.406415 systemd[1]: Created slice kubepods-besteffort-podaeeeac9e_eff5_42ea_903a_01682d16a2c1.slice - libcontainer container kubepods-besteffort-podaeeeac9e_eff5_42ea_903a_01682d16a2c1.slice. Mar 12 23:47:13.432297 kubelet[3357]: I0312 23:47:13.431201 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpd57\" (UniqueName: \"kubernetes.io/projected/9c5c2018-747c-459a-ad4c-0b4fa811d698-kube-api-access-dpd57\") pod \"calico-apiserver-96c4f6954-j5klg\" (UID: \"9c5c2018-747c-459a-ad4c-0b4fa811d698\") " pod="calico-system/calico-apiserver-96c4f6954-j5klg" Mar 12 23:47:13.432297 kubelet[3357]: I0312 23:47:13.431278 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/aeeeac9e-eff5-42ea-903a-01682d16a2c1-nginx-config\") pod \"whisker-5bd9b759cc-lpmnh\" (UID: \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\") " pod="calico-system/whisker-5bd9b759cc-lpmnh" Mar 12 23:47:13.432297 kubelet[3357]: I0312 23:47:13.431322 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x2t4\" (UniqueName: \"kubernetes.io/projected/f2877bb0-6b5f-460a-8752-03e954e499f9-kube-api-access-2x2t4\") pod 
\"coredns-674b8bbfcf-qhx57\" (UID: \"f2877bb0-6b5f-460a-8752-03e954e499f9\") " pod="kube-system/coredns-674b8bbfcf-qhx57" Mar 12 23:47:13.432297 kubelet[3357]: I0312 23:47:13.431361 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dd1b291-6c99-4d9b-b57e-b54a5e164f4c-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-kmm7z\" (UID: \"7dd1b291-6c99-4d9b-b57e-b54a5e164f4c\") " pod="calico-system/goldmane-5b85766d88-kmm7z" Mar 12 23:47:13.432297 kubelet[3357]: I0312 23:47:13.431401 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjdp\" (UniqueName: \"kubernetes.io/projected/124c3195-be33-46e0-a14d-d1edb9412c07-kube-api-access-4bjdp\") pod \"coredns-674b8bbfcf-jthnw\" (UID: \"124c3195-be33-46e0-a14d-d1edb9412c07\") " pod="kube-system/coredns-674b8bbfcf-jthnw" Mar 12 23:47:13.432717 kubelet[3357]: I0312 23:47:13.431456 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aeeeac9e-eff5-42ea-903a-01682d16a2c1-whisker-backend-key-pair\") pod \"whisker-5bd9b759cc-lpmnh\" (UID: \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\") " pod="calico-system/whisker-5bd9b759cc-lpmnh" Mar 12 23:47:13.432717 kubelet[3357]: I0312 23:47:13.431493 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeeeac9e-eff5-42ea-903a-01682d16a2c1-whisker-ca-bundle\") pod \"whisker-5bd9b759cc-lpmnh\" (UID: \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\") " pod="calico-system/whisker-5bd9b759cc-lpmnh" Mar 12 23:47:13.432717 kubelet[3357]: I0312 23:47:13.431529 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0aa24a97-cdcd-41d3-827c-7d2de2d0d07d-tigera-ca-bundle\") pod \"calico-kube-controllers-bf8f5f48c-5x29h\" (UID: \"0aa24a97-cdcd-41d3-827c-7d2de2d0d07d\") " pod="calico-system/calico-kube-controllers-bf8f5f48c-5x29h" Mar 12 23:47:13.432717 kubelet[3357]: I0312 23:47:13.431568 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vsjf\" (UniqueName: \"kubernetes.io/projected/aeeeac9e-eff5-42ea-903a-01682d16a2c1-kube-api-access-4vsjf\") pod \"whisker-5bd9b759cc-lpmnh\" (UID: \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\") " pod="calico-system/whisker-5bd9b759cc-lpmnh" Mar 12 23:47:13.432717 kubelet[3357]: I0312 23:47:13.431605 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c5c2018-747c-459a-ad4c-0b4fa811d698-calico-apiserver-certs\") pod \"calico-apiserver-96c4f6954-j5klg\" (UID: \"9c5c2018-747c-459a-ad4c-0b4fa811d698\") " pod="calico-system/calico-apiserver-96c4f6954-j5klg" Mar 12 23:47:13.433078 kubelet[3357]: I0312 23:47:13.431643 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2877bb0-6b5f-460a-8752-03e954e499f9-config-volume\") pod \"coredns-674b8bbfcf-qhx57\" (UID: \"f2877bb0-6b5f-460a-8752-03e954e499f9\") " pod="kube-system/coredns-674b8bbfcf-qhx57" Mar 12 23:47:13.433078 kubelet[3357]: I0312 23:47:13.431681 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hrr4\" (UniqueName: \"kubernetes.io/projected/f1a13fa1-fd17-43ff-a5e3-be119533e7ce-kube-api-access-7hrr4\") pod \"calico-apiserver-96c4f6954-g4777\" (UID: \"f1a13fa1-fd17-43ff-a5e3-be119533e7ce\") " pod="calico-system/calico-apiserver-96c4f6954-g4777" Mar 12 23:47:13.433078 kubelet[3357]: I0312 23:47:13.431723 3357 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd1b291-6c99-4d9b-b57e-b54a5e164f4c-config\") pod \"goldmane-5b85766d88-kmm7z\" (UID: \"7dd1b291-6c99-4d9b-b57e-b54a5e164f4c\") " pod="calico-system/goldmane-5b85766d88-kmm7z" Mar 12 23:47:13.433078 kubelet[3357]: I0312 23:47:13.431760 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7dd1b291-6c99-4d9b-b57e-b54a5e164f4c-goldmane-key-pair\") pod \"goldmane-5b85766d88-kmm7z\" (UID: \"7dd1b291-6c99-4d9b-b57e-b54a5e164f4c\") " pod="calico-system/goldmane-5b85766d88-kmm7z" Mar 12 23:47:13.433078 kubelet[3357]: I0312 23:47:13.431807 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xkb\" (UniqueName: \"kubernetes.io/projected/0aa24a97-cdcd-41d3-827c-7d2de2d0d07d-kube-api-access-x9xkb\") pod \"calico-kube-controllers-bf8f5f48c-5x29h\" (UID: \"0aa24a97-cdcd-41d3-827c-7d2de2d0d07d\") " pod="calico-system/calico-kube-controllers-bf8f5f48c-5x29h" Mar 12 23:47:13.433338 kubelet[3357]: I0312 23:47:13.431851 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1a13fa1-fd17-43ff-a5e3-be119533e7ce-calico-apiserver-certs\") pod \"calico-apiserver-96c4f6954-g4777\" (UID: \"f1a13fa1-fd17-43ff-a5e3-be119533e7ce\") " pod="calico-system/calico-apiserver-96c4f6954-g4777" Mar 12 23:47:13.436095 kubelet[3357]: I0312 23:47:13.431886 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7mc6\" (UniqueName: \"kubernetes.io/projected/7dd1b291-6c99-4d9b-b57e-b54a5e164f4c-kube-api-access-r7mc6\") pod \"goldmane-5b85766d88-kmm7z\" (UID: \"7dd1b291-6c99-4d9b-b57e-b54a5e164f4c\") " 
pod="calico-system/goldmane-5b85766d88-kmm7z" Mar 12 23:47:13.436433 kubelet[3357]: I0312 23:47:13.436369 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/124c3195-be33-46e0-a14d-d1edb9412c07-config-volume\") pod \"coredns-674b8bbfcf-jthnw\" (UID: \"124c3195-be33-46e0-a14d-d1edb9412c07\") " pod="kube-system/coredns-674b8bbfcf-jthnw" Mar 12 23:47:13.443045 systemd[1]: Created slice kubepods-burstable-pod124c3195_be33_46e0_a14d_d1edb9412c07.slice - libcontainer container kubepods-burstable-pod124c3195_be33_46e0_a14d_d1edb9412c07.slice. Mar 12 23:47:13.463013 systemd[1]: Created slice kubepods-burstable-podf2877bb0_6b5f_460a_8752_03e954e499f9.slice - libcontainer container kubepods-burstable-podf2877bb0_6b5f_460a_8752_03e954e499f9.slice. Mar 12 23:47:13.484764 systemd[1]: Created slice kubepods-besteffort-pod9c5c2018_747c_459a_ad4c_0b4fa811d698.slice - libcontainer container kubepods-besteffort-pod9c5c2018_747c_459a_ad4c_0b4fa811d698.slice. Mar 12 23:47:13.508858 systemd[1]: Created slice kubepods-besteffort-podf1a13fa1_fd17_43ff_a5e3_be119533e7ce.slice - libcontainer container kubepods-besteffort-podf1a13fa1_fd17_43ff_a5e3_be119533e7ce.slice. Mar 12 23:47:13.539810 systemd[1]: Created slice kubepods-besteffort-pod7dd1b291_6c99_4d9b_b57e_b54a5e164f4c.slice - libcontainer container kubepods-besteffort-pod7dd1b291_6c99_4d9b_b57e_b54a5e164f4c.slice. Mar 12 23:47:13.654746 systemd[1]: Created slice kubepods-besteffort-pod0aa24a97_cdcd_41d3_827c_7d2de2d0d07d.slice - libcontainer container kubepods-besteffort-pod0aa24a97_cdcd_41d3_827c_7d2de2d0d07d.slice. 
Mar 12 23:47:13.729275 containerd[2024]: time="2026-03-12T23:47:13.725103937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd9b759cc-lpmnh,Uid:aeeeac9e-eff5-42ea-903a-01682d16a2c1,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:13.735370 containerd[2024]: time="2026-03-12T23:47:13.732042277Z" level=info msg="CreateContainer within sandbox \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 12 23:47:13.757001 containerd[2024]: time="2026-03-12T23:47:13.756922369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jthnw,Uid:124c3195-be33-46e0-a14d-d1edb9412c07,Namespace:kube-system,Attempt:0,}" Mar 12 23:47:13.783075 containerd[2024]: time="2026-03-12T23:47:13.783024865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhx57,Uid:f2877bb0-6b5f-460a-8752-03e954e499f9,Namespace:kube-system,Attempt:0,}" Mar 12 23:47:13.794582 containerd[2024]: time="2026-03-12T23:47:13.794420353Z" level=info msg="Container 60e07b18b8df631671e37d9a96778677af92a95385c3e4cfb3b8b54c8024ee75: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:13.799925 containerd[2024]: time="2026-03-12T23:47:13.797911465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c4f6954-j5klg,Uid:9c5c2018-747c-459a-ad4c-0b4fa811d698,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:13.830919 containerd[2024]: time="2026-03-12T23:47:13.829660645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c4f6954-g4777,Uid:f1a13fa1-fd17-43ff-a5e3-be119533e7ce,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:13.866561 containerd[2024]: time="2026-03-12T23:47:13.866184433Z" level=info msg="CreateContainer within sandbox \"629a5a5e0fd53e32390c306e1c830cc143749d5fee055ffa25c87792cae4b9dd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"60e07b18b8df631671e37d9a96778677af92a95385c3e4cfb3b8b54c8024ee75\"" Mar 12 23:47:13.884940 containerd[2024]: time="2026-03-12T23:47:13.884844350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-kmm7z,Uid:7dd1b291-6c99-4d9b-b57e-b54a5e164f4c,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:13.885841 containerd[2024]: time="2026-03-12T23:47:13.885798506Z" level=info msg="StartContainer for \"60e07b18b8df631671e37d9a96778677af92a95385c3e4cfb3b8b54c8024ee75\"" Mar 12 23:47:13.902910 containerd[2024]: time="2026-03-12T23:47:13.902805326Z" level=info msg="connecting to shim 60e07b18b8df631671e37d9a96778677af92a95385c3e4cfb3b8b54c8024ee75" address="unix:///run/containerd/s/8cab7f216f14192c09c3b25b9e95d2230d3993d36c66e640f1882b61d07a09b5" protocol=ttrpc version=3 Mar 12 23:47:13.979638 containerd[2024]: time="2026-03-12T23:47:13.979047026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bf8f5f48c-5x29h,Uid:0aa24a97-cdcd-41d3-827c-7d2de2d0d07d,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:14.015216 systemd[1]: Started cri-containerd-60e07b18b8df631671e37d9a96778677af92a95385c3e4cfb3b8b54c8024ee75.scope - libcontainer container 60e07b18b8df631671e37d9a96778677af92a95385c3e4cfb3b8b54c8024ee75. 
Mar 12 23:47:14.345031 containerd[2024]: time="2026-03-12T23:47:14.344466060Z" level=info msg="StartContainer for \"60e07b18b8df631671e37d9a96778677af92a95385c3e4cfb3b8b54c8024ee75\" returns successfully"
Mar 12 23:47:14.495970 containerd[2024]: time="2026-03-12T23:47:14.494248453Z" level=error msg="Failed to destroy network for sandbox \"886461ee8d5abb82edb374266cea827320551669daa05466398b577dded404de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.504480 systemd[1]: run-netns-cni\x2ded549bfa\x2dd114\x2df866\x2dd55d\x2dd89735848869.mount: Deactivated successfully.
Mar 12 23:47:14.507025 containerd[2024]: time="2026-03-12T23:47:14.506935225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-kmm7z,Uid:7dd1b291-6c99-4d9b-b57e-b54a5e164f4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"886461ee8d5abb82edb374266cea827320551669daa05466398b577dded404de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.507995 kubelet[3357]: E0312 23:47:14.507272 3357 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886461ee8d5abb82edb374266cea827320551669daa05466398b577dded404de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.507995 kubelet[3357]: E0312 23:47:14.507387 3357 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886461ee8d5abb82edb374266cea827320551669daa05466398b577dded404de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-kmm7z"
Mar 12 23:47:14.507995 kubelet[3357]: E0312 23:47:14.507425 3357 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886461ee8d5abb82edb374266cea827320551669daa05466398b577dded404de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-kmm7z"
Mar 12 23:47:14.508644 kubelet[3357]: E0312 23:47:14.507503 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-kmm7z_calico-system(7dd1b291-6c99-4d9b-b57e-b54a5e164f4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-kmm7z_calico-system(7dd1b291-6c99-4d9b-b57e-b54a5e164f4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"886461ee8d5abb82edb374266cea827320551669daa05466398b577dded404de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-kmm7z" podUID="7dd1b291-6c99-4d9b-b57e-b54a5e164f4c"
Mar 12 23:47:14.576457 containerd[2024]: time="2026-03-12T23:47:14.575878717Z" level=error msg="Failed to destroy network for sandbox \"c2330067a492f0020af583dfde31fbd0d01f216aff2954e5dd491d5d39fabc6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.587719 containerd[2024]: time="2026-03-12T23:47:14.587160277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jthnw,Uid:124c3195-be33-46e0-a14d-d1edb9412c07,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2330067a492f0020af583dfde31fbd0d01f216aff2954e5dd491d5d39fabc6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.587779 systemd[1]: run-netns-cni\x2da0320568\x2de4ad\x2ded55\x2d57bc\x2dd6eae9285df0.mount: Deactivated successfully.
Mar 12 23:47:14.590161 kubelet[3357]: E0312 23:47:14.590087 3357 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2330067a492f0020af583dfde31fbd0d01f216aff2954e5dd491d5d39fabc6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.590507 kubelet[3357]: E0312 23:47:14.590188 3357 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2330067a492f0020af583dfde31fbd0d01f216aff2954e5dd491d5d39fabc6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jthnw"
Mar 12 23:47:14.590507 kubelet[3357]: E0312 23:47:14.590224 3357 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2330067a492f0020af583dfde31fbd0d01f216aff2954e5dd491d5d39fabc6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jthnw"
Mar 12 23:47:14.590507 kubelet[3357]: E0312 23:47:14.590298 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jthnw_kube-system(124c3195-be33-46e0-a14d-d1edb9412c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jthnw_kube-system(124c3195-be33-46e0-a14d-d1edb9412c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2330067a492f0020af583dfde31fbd0d01f216aff2954e5dd491d5d39fabc6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jthnw" podUID="124c3195-be33-46e0-a14d-d1edb9412c07"
Mar 12 23:47:14.600628 containerd[2024]: time="2026-03-12T23:47:14.599908201Z" level=error msg="Failed to destroy network for sandbox \"e890cc168f4e005f4d1308ca4b3ab9ca551af8d315877cb299d638edcdc9648a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.608685 containerd[2024]: time="2026-03-12T23:47:14.607837585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd9b759cc-lpmnh,Uid:aeeeac9e-eff5-42ea-903a-01682d16a2c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e890cc168f4e005f4d1308ca4b3ab9ca551af8d315877cb299d638edcdc9648a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.609429 kubelet[3357]: E0312 23:47:14.609356 3357 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e890cc168f4e005f4d1308ca4b3ab9ca551af8d315877cb299d638edcdc9648a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.610188 kubelet[3357]: E0312 23:47:14.609451 3357 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e890cc168f4e005f4d1308ca4b3ab9ca551af8d315877cb299d638edcdc9648a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bd9b759cc-lpmnh"
Mar 12 23:47:14.610188 kubelet[3357]: E0312 23:47:14.609488 3357 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e890cc168f4e005f4d1308ca4b3ab9ca551af8d315877cb299d638edcdc9648a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bd9b759cc-lpmnh"
Mar 12 23:47:14.610188 kubelet[3357]: E0312 23:47:14.609560 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bd9b759cc-lpmnh_calico-system(aeeeac9e-eff5-42ea-903a-01682d16a2c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bd9b759cc-lpmnh_calico-system(aeeeac9e-eff5-42ea-903a-01682d16a2c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e890cc168f4e005f4d1308ca4b3ab9ca551af8d315877cb299d638edcdc9648a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bd9b759cc-lpmnh" podUID="aeeeac9e-eff5-42ea-903a-01682d16a2c1"
Mar 12 23:47:14.610604 systemd[1]: run-netns-cni\x2dd2fca6ef\x2d3fd0\x2daccb\x2d10bd\x2def06c1370209.mount: Deactivated successfully.
Mar 12 23:47:14.635965 containerd[2024]: time="2026-03-12T23:47:14.635876329Z" level=error msg="Failed to destroy network for sandbox \"e6e7a52a20d0c4d1f03319c6faaf41428eea6e33a5250237f59b417c6a68bca2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.643470 systemd[1]: run-netns-cni\x2d81fe9020\x2d5f90\x2d311d\x2d371b\x2d6fab1d6e809d.mount: Deactivated successfully.
Mar 12 23:47:14.646865 containerd[2024]: time="2026-03-12T23:47:14.646692037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhx57,Uid:f2877bb0-6b5f-460a-8752-03e954e499f9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e7a52a20d0c4d1f03319c6faaf41428eea6e33a5250237f59b417c6a68bca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.650988 kubelet[3357]: E0312 23:47:14.650885 3357 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e7a52a20d0c4d1f03319c6faaf41428eea6e33a5250237f59b417c6a68bca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.651155 kubelet[3357]: E0312 23:47:14.651030 3357 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e7a52a20d0c4d1f03319c6faaf41428eea6e33a5250237f59b417c6a68bca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qhx57"
Mar 12 23:47:14.651155 kubelet[3357]: E0312 23:47:14.651066 3357 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e7a52a20d0c4d1f03319c6faaf41428eea6e33a5250237f59b417c6a68bca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qhx57"
Mar 12 23:47:14.651747 kubelet[3357]: E0312 23:47:14.651680 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qhx57_kube-system(f2877bb0-6b5f-460a-8752-03e954e499f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qhx57_kube-system(f2877bb0-6b5f-460a-8752-03e954e499f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6e7a52a20d0c4d1f03319c6faaf41428eea6e33a5250237f59b417c6a68bca2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qhx57" podUID="f2877bb0-6b5f-460a-8752-03e954e499f9"
Mar 12 23:47:14.654619 containerd[2024]: time="2026-03-12T23:47:14.654442741Z" level=error msg="Failed to destroy network for sandbox \"a8897b8f3005def1dcc76e458e08e154359814c24fddc57a89b269b04405f19a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.658951 containerd[2024]: time="2026-03-12T23:47:14.658798189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bf8f5f48c-5x29h,Uid:0aa24a97-cdcd-41d3-827c-7d2de2d0d07d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8897b8f3005def1dcc76e458e08e154359814c24fddc57a89b269b04405f19a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.661105 kubelet[3357]: E0312 23:47:14.659480 3357 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8897b8f3005def1dcc76e458e08e154359814c24fddc57a89b269b04405f19a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.661319 containerd[2024]: time="2026-03-12T23:47:14.660790597Z" level=error msg="Failed to destroy network for sandbox \"159dd61bc519948c0ba30e79fbc91dc9c8255a020632805704c95af450eb24c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.661388 kubelet[3357]: E0312 23:47:14.661213 3357 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8897b8f3005def1dcc76e458e08e154359814c24fddc57a89b269b04405f19a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bf8f5f48c-5x29h"
Mar 12 23:47:14.661683 kubelet[3357]: E0312 23:47:14.661396 3357 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8897b8f3005def1dcc76e458e08e154359814c24fddc57a89b269b04405f19a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bf8f5f48c-5x29h"
Mar 12 23:47:14.662097 kubelet[3357]: E0312 23:47:14.661683 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bf8f5f48c-5x29h_calico-system(0aa24a97-cdcd-41d3-827c-7d2de2d0d07d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bf8f5f48c-5x29h_calico-system(0aa24a97-cdcd-41d3-827c-7d2de2d0d07d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8897b8f3005def1dcc76e458e08e154359814c24fddc57a89b269b04405f19a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bf8f5f48c-5x29h" podUID="0aa24a97-cdcd-41d3-827c-7d2de2d0d07d"
Mar 12 23:47:14.670116 containerd[2024]: time="2026-03-12T23:47:14.669927217Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c4f6954-j5klg,Uid:9c5c2018-747c-459a-ad4c-0b4fa811d698,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"159dd61bc519948c0ba30e79fbc91dc9c8255a020632805704c95af450eb24c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.670939 kubelet[3357]: E0312 23:47:14.670650 3357 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"159dd61bc519948c0ba30e79fbc91dc9c8255a020632805704c95af450eb24c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.671065 kubelet[3357]: E0312 23:47:14.670975 3357 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"159dd61bc519948c0ba30e79fbc91dc9c8255a020632805704c95af450eb24c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-96c4f6954-j5klg"
Mar 12 23:47:14.671065 kubelet[3357]: E0312 23:47:14.671040 3357 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"159dd61bc519948c0ba30e79fbc91dc9c8255a020632805704c95af450eb24c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-96c4f6954-j5klg"
Mar 12 23:47:14.672190 kubelet[3357]: E0312 23:47:14.672108 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c4f6954-j5klg_calico-system(9c5c2018-747c-459a-ad4c-0b4fa811d698)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c4f6954-j5klg_calico-system(9c5c2018-747c-459a-ad4c-0b4fa811d698)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"159dd61bc519948c0ba30e79fbc91dc9c8255a020632805704c95af450eb24c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-96c4f6954-j5klg" podUID="9c5c2018-747c-459a-ad4c-0b4fa811d698"
Mar 12 23:47:14.686060 containerd[2024]: time="2026-03-12T23:47:14.685878062Z" level=error msg="Failed to destroy network for sandbox \"c26be319a83ec1bc93d2ae16add0c7f3fe39cc5caa5c53edd6825d6836b28807\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.690311 containerd[2024]: time="2026-03-12T23:47:14.690234050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c4f6954-g4777,Uid:f1a13fa1-fd17-43ff-a5e3-be119533e7ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26be319a83ec1bc93d2ae16add0c7f3fe39cc5caa5c53edd6825d6836b28807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.690835 kubelet[3357]: E0312 23:47:14.690770 3357 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26be319a83ec1bc93d2ae16add0c7f3fe39cc5caa5c53edd6825d6836b28807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 23:47:14.691831 kubelet[3357]: E0312 23:47:14.690859 3357 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26be319a83ec1bc93d2ae16add0c7f3fe39cc5caa5c53edd6825d6836b28807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-96c4f6954-g4777"
Mar 12 23:47:14.691831 kubelet[3357]: E0312 23:47:14.691279 3357 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26be319a83ec1bc93d2ae16add0c7f3fe39cc5caa5c53edd6825d6836b28807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-96c4f6954-g4777"
Mar 12 23:47:14.693623 kubelet[3357]: E0312 23:47:14.692865 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c4f6954-g4777_calico-system(f1a13fa1-fd17-43ff-a5e3-be119533e7ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c4f6954-g4777_calico-system(f1a13fa1-fd17-43ff-a5e3-be119533e7ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c26be319a83ec1bc93d2ae16add0c7f3fe39cc5caa5c53edd6825d6836b28807\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-96c4f6954-g4777" podUID="f1a13fa1-fd17-43ff-a5e3-be119533e7ce"
Mar 12 23:47:14.817494 kubelet[3357]: I0312 23:47:14.817381 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vdhsg" podStartSLOduration=6.046031515 podStartE2EDuration="21.817331438s" podCreationTimestamp="2026-03-12 23:46:53 +0000 UTC" firstStartedPulling="2026-03-12 23:46:55.541448082 +0000 UTC m=+28.672954199" lastFinishedPulling="2026-03-12 23:47:11.312748029 +0000 UTC m=+44.444254122" observedRunningTime="2026-03-12 23:47:14.739451618 +0000 UTC m=+47.870957723" watchObservedRunningTime="2026-03-12 23:47:14.817331438 +0000 UTC m=+47.948837531"
Mar 12 23:47:14.860225 kubelet[3357]: I0312 23:47:14.860031 3357 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vsjf\" (UniqueName: \"kubernetes.io/projected/aeeeac9e-eff5-42ea-903a-01682d16a2c1-kube-api-access-4vsjf\") pod \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\" (UID: \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\") "
Mar 12 23:47:14.862925 kubelet[3357]: I0312 23:47:14.861067 3357 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aeeeac9e-eff5-42ea-903a-01682d16a2c1-whisker-backend-key-pair\") pod \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\" (UID: \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\") "
Mar 12 23:47:14.862925 kubelet[3357]: I0312 23:47:14.861121 3357 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeeeac9e-eff5-42ea-903a-01682d16a2c1-whisker-ca-bundle\") pod \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\" (UID: \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\") "
Mar 12 23:47:14.862925 kubelet[3357]: I0312 23:47:14.861167 3357 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/aeeeac9e-eff5-42ea-903a-01682d16a2c1-nginx-config\") pod \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\" (UID: \"aeeeac9e-eff5-42ea-903a-01682d16a2c1\") "
Mar 12 23:47:14.862925 kubelet[3357]: I0312 23:47:14.861762 3357 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeeeac9e-eff5-42ea-903a-01682d16a2c1-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "aeeeac9e-eff5-42ea-903a-01682d16a2c1" (UID: "aeeeac9e-eff5-42ea-903a-01682d16a2c1"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 12 23:47:14.867837 kubelet[3357]: I0312 23:47:14.867668 3357 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeeeac9e-eff5-42ea-903a-01682d16a2c1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aeeeac9e-eff5-42ea-903a-01682d16a2c1" (UID: "aeeeac9e-eff5-42ea-903a-01682d16a2c1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 12 23:47:14.875414 kubelet[3357]: I0312 23:47:14.875157 3357 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeeeac9e-eff5-42ea-903a-01682d16a2c1-kube-api-access-4vsjf" (OuterVolumeSpecName: "kube-api-access-4vsjf") pod "aeeeac9e-eff5-42ea-903a-01682d16a2c1" (UID: "aeeeac9e-eff5-42ea-903a-01682d16a2c1"). InnerVolumeSpecName "kube-api-access-4vsjf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 12 23:47:14.879311 kubelet[3357]: I0312 23:47:14.879242 3357 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeeeac9e-eff5-42ea-903a-01682d16a2c1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aeeeac9e-eff5-42ea-903a-01682d16a2c1" (UID: "aeeeac9e-eff5-42ea-903a-01682d16a2c1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 12 23:47:14.962525 kubelet[3357]: I0312 23:47:14.962370 3357 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aeeeac9e-eff5-42ea-903a-01682d16a2c1-whisker-backend-key-pair\") on node \"ip-172-31-21-65\" DevicePath \"\""
Mar 12 23:47:14.962525 kubelet[3357]: I0312 23:47:14.962425 3357 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeeeac9e-eff5-42ea-903a-01682d16a2c1-whisker-ca-bundle\") on node \"ip-172-31-21-65\" DevicePath \"\""
Mar 12 23:47:14.962525 kubelet[3357]: I0312 23:47:14.962449 3357 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/aeeeac9e-eff5-42ea-903a-01682d16a2c1-nginx-config\") on node \"ip-172-31-21-65\" DevicePath \"\""
Mar 12 23:47:14.962525 kubelet[3357]: I0312 23:47:14.962489 3357 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4vsjf\" (UniqueName: \"kubernetes.io/projected/aeeeac9e-eff5-42ea-903a-01682d16a2c1-kube-api-access-4vsjf\") on node \"ip-172-31-21-65\" DevicePath \"\""
Mar 12 23:47:15.214680 systemd[1]: Created slice kubepods-besteffort-podd8bbe8b1_f8e6_4ee5_86c1_14bad7db0d66.slice - libcontainer container kubepods-besteffort-podd8bbe8b1_f8e6_4ee5_86c1_14bad7db0d66.slice.
Mar 12 23:47:15.219026 systemd[1]: Removed slice kubepods-besteffort-podaeeeac9e_eff5_42ea_903a_01682d16a2c1.slice - libcontainer container kubepods-besteffort-podaeeeac9e_eff5_42ea_903a_01682d16a2c1.slice.
Mar 12 23:47:15.225638 containerd[2024]: time="2026-03-12T23:47:15.225588852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xgscd,Uid:d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66,Namespace:calico-system,Attempt:0,}"
Mar 12 23:47:15.375930 systemd[1]: run-netns-cni\x2d9a1ea9b5\x2d862a\x2da73c\x2d4137\x2db1460a4a46c2.mount: Deactivated successfully.
Mar 12 23:47:15.376102 systemd[1]: run-netns-cni\x2d19ccf279\x2d8bb3\x2d0d39\x2d42fd\x2d6bb04d5c23b3.mount: Deactivated successfully.
Mar 12 23:47:15.376223 systemd[1]: run-netns-cni\x2d90660606\x2d5218\x2dd41f\x2d8491\x2d70363a269ed2.mount: Deactivated successfully.
Mar 12 23:47:15.376350 systemd[1]: var-lib-kubelet-pods-aeeeac9e\x2deff5\x2d42ea\x2d903a\x2d01682d16a2c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4vsjf.mount: Deactivated successfully.
Mar 12 23:47:15.376485 systemd[1]: var-lib-kubelet-pods-aeeeac9e\x2deff5\x2d42ea\x2d903a\x2d01682d16a2c1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 12 23:47:15.490969 systemd-networkd[1867]: calic693ea06a8e: Link UP
Mar 12 23:47:15.491450 systemd-networkd[1867]: calic693ea06a8e: Gained carrier
Mar 12 23:47:15.502268 (udev-worker)[4720]: Network interface NamePolicy= disabled on kernel command line.
Mar 12 23:47:15.520446 containerd[2024]: 2026-03-12 23:47:15.266 [ERROR][4700] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 12 23:47:15.520446 containerd[2024]: 2026-03-12 23:47:15.306 [INFO][4700] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0 csi-node-driver- calico-system d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66 720 0 2026-03-12 23:46:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-21-65 csi-node-driver-xgscd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic693ea06a8e [] [] }} ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Namespace="calico-system" Pod="csi-node-driver-xgscd" WorkloadEndpoint="ip--172--31--21--65-k8s-csi--node--driver--xgscd-"
Mar 12 23:47:15.520446 containerd[2024]: 2026-03-12 23:47:15.306 [INFO][4700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Namespace="calico-system" Pod="csi-node-driver-xgscd" WorkloadEndpoint="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0"
Mar 12 23:47:15.520446 containerd[2024]: 2026-03-12 23:47:15.402 [INFO][4711] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" HandleID="k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Workload="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0"
Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.419 [INFO][4711] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" HandleID="k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Workload="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000387c10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-65", "pod":"csi-node-driver-xgscd", "timestamp":"2026-03-12 23:47:15.402957637 +0000 UTC"}, Hostname:"ip-172-31-21-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000442580)}
Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.419 [INFO][4711] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.419 [INFO][4711] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.420 [INFO][4711] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-65' Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.423 [INFO][4711] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" host="ip-172-31-21-65" Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.431 [INFO][4711] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-65" Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.438 [INFO][4711] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.441 [INFO][4711] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:15.521239 containerd[2024]: 2026-03-12 23:47:15.444 [INFO][4711] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:15.523270 containerd[2024]: 2026-03-12 23:47:15.445 [INFO][4711] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" host="ip-172-31-21-65" Mar 12 23:47:15.523270 containerd[2024]: 2026-03-12 23:47:15.447 [INFO][4711] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a Mar 12 23:47:15.523270 containerd[2024]: 2026-03-12 23:47:15.454 [INFO][4711] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" host="ip-172-31-21-65" Mar 12 23:47:15.523270 containerd[2024]: 
2026-03-12 23:47:15.466 [INFO][4711] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" host="ip-172-31-21-65" Mar 12 23:47:15.523270 containerd[2024]: 2026-03-12 23:47:15.466 [INFO][4711] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" host="ip-172-31-21-65" Mar 12 23:47:15.523270 containerd[2024]: 2026-03-12 23:47:15.467 [INFO][4711] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:15.523270 containerd[2024]: 2026-03-12 23:47:15.467 [INFO][4711] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" HandleID="k8s-pod-network.8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Workload="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0" Mar 12 23:47:15.523597 containerd[2024]: 2026-03-12 23:47:15.475 [INFO][4700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Namespace="calico-system" Pod="csi-node-driver-xgscd" WorkloadEndpoint="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"", Pod:"csi-node-driver-xgscd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic693ea06a8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:15.524926 containerd[2024]: 2026-03-12 23:47:15.475 [INFO][4700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.193/32] ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Namespace="calico-system" Pod="csi-node-driver-xgscd" WorkloadEndpoint="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0" Mar 12 23:47:15.524926 containerd[2024]: 2026-03-12 23:47:15.476 [INFO][4700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic693ea06a8e ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Namespace="calico-system" Pod="csi-node-driver-xgscd" WorkloadEndpoint="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0" Mar 12 23:47:15.524926 containerd[2024]: 2026-03-12 23:47:15.490 [INFO][4700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Namespace="calico-system" Pod="csi-node-driver-xgscd" WorkloadEndpoint="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0" Mar 12 23:47:15.525836 containerd[2024]: 2026-03-12 23:47:15.493 
[INFO][4700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Namespace="calico-system" Pod="csi-node-driver-xgscd" WorkloadEndpoint="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a", Pod:"csi-node-driver-xgscd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic693ea06a8e", MAC:"a2:97:f8:d2:ea:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:15.526410 containerd[2024]: 2026-03-12 23:47:15.513 [INFO][4700] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" Namespace="calico-system" Pod="csi-node-driver-xgscd" WorkloadEndpoint="ip--172--31--21--65-k8s-csi--node--driver--xgscd-eth0" Mar 12 23:47:15.606772 containerd[2024]: time="2026-03-12T23:47:15.606697826Z" level=info msg="connecting to shim 8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a" address="unix:///run/containerd/s/c3f9927be8bf2d6e6d92620f6ec34a0ed46ca0a75b89f6a83236d6e36b891044" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:15.649257 systemd[1]: Started cri-containerd-8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a.scope - libcontainer container 8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a. Mar 12 23:47:15.703136 containerd[2024]: time="2026-03-12T23:47:15.702963807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xgscd,Uid:d8bbe8b1-f8e6-4ee5-86c1-14bad7db0d66,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a\"" Mar 12 23:47:15.706551 containerd[2024]: time="2026-03-12T23:47:15.706507503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 23:47:15.715673 kubelet[3357]: I0312 23:47:15.714997 3357 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 23:47:15.843847 systemd[1]: Created slice kubepods-besteffort-pod2030e334_13a7_4a2b_ae77_a3cdb6229a10.slice - libcontainer container kubepods-besteffort-pod2030e334_13a7_4a2b_ae77_a3cdb6229a10.slice. 
Mar 12 23:47:15.869331 kubelet[3357]: I0312 23:47:15.869267 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2030e334-13a7-4a2b-ae77-a3cdb6229a10-whisker-ca-bundle\") pod \"whisker-7f54c4b5c8-dwz7w\" (UID: \"2030e334-13a7-4a2b-ae77-a3cdb6229a10\") " pod="calico-system/whisker-7f54c4b5c8-dwz7w" Mar 12 23:47:15.873561 kubelet[3357]: I0312 23:47:15.871029 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvfnw\" (UniqueName: \"kubernetes.io/projected/2030e334-13a7-4a2b-ae77-a3cdb6229a10-kube-api-access-nvfnw\") pod \"whisker-7f54c4b5c8-dwz7w\" (UID: \"2030e334-13a7-4a2b-ae77-a3cdb6229a10\") " pod="calico-system/whisker-7f54c4b5c8-dwz7w" Mar 12 23:47:15.874090 kubelet[3357]: I0312 23:47:15.873811 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2030e334-13a7-4a2b-ae77-a3cdb6229a10-nginx-config\") pod \"whisker-7f54c4b5c8-dwz7w\" (UID: \"2030e334-13a7-4a2b-ae77-a3cdb6229a10\") " pod="calico-system/whisker-7f54c4b5c8-dwz7w" Mar 12 23:47:15.874090 kubelet[3357]: I0312 23:47:15.873914 3357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2030e334-13a7-4a2b-ae77-a3cdb6229a10-whisker-backend-key-pair\") pod \"whisker-7f54c4b5c8-dwz7w\" (UID: \"2030e334-13a7-4a2b-ae77-a3cdb6229a10\") " pod="calico-system/whisker-7f54c4b5c8-dwz7w" Mar 12 23:47:16.165737 containerd[2024]: time="2026-03-12T23:47:16.165664645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f54c4b5c8-dwz7w,Uid:2030e334-13a7-4a2b-ae77-a3cdb6229a10,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:16.389331 (udev-worker)[4719]: Network interface NamePolicy= disabled on kernel command line. 
Mar 12 23:47:16.393067 systemd-networkd[1867]: calib3d0922ad59: Link UP Mar 12 23:47:16.393466 systemd-networkd[1867]: calib3d0922ad59: Gained carrier Mar 12 23:47:16.423921 containerd[2024]: 2026-03-12 23:47:16.209 [ERROR][4775] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 23:47:16.423921 containerd[2024]: 2026-03-12 23:47:16.238 [INFO][4775] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0 whisker-7f54c4b5c8- calico-system 2030e334-13a7-4a2b-ae77-a3cdb6229a10 915 0 2026-03-12 23:47:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f54c4b5c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-21-65 whisker-7f54c4b5c8-dwz7w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib3d0922ad59 [] [] }} ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Namespace="calico-system" Pod="whisker-7f54c4b5c8-dwz7w" WorkloadEndpoint="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-" Mar 12 23:47:16.423921 containerd[2024]: 2026-03-12 23:47:16.239 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Namespace="calico-system" Pod="whisker-7f54c4b5c8-dwz7w" WorkloadEndpoint="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" Mar 12 23:47:16.423921 containerd[2024]: 2026-03-12 23:47:16.294 [INFO][4810] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" HandleID="k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" 
Workload="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.317 [INFO][4810] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" HandleID="k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Workload="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-65", "pod":"whisker-7f54c4b5c8-dwz7w", "timestamp":"2026-03-12 23:47:16.29473373 +0000 UTC"}, Hostname:"ip-172-31-21-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000daf20)} Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.317 [INFO][4810] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.317 [INFO][4810] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.317 [INFO][4810] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-65' Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.321 [INFO][4810] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" host="ip-172-31-21-65" Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.330 [INFO][4810] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-65" Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.339 [INFO][4810] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.343 [INFO][4810] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:16.424261 containerd[2024]: 2026-03-12 23:47:16.347 [INFO][4810] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:16.424814 containerd[2024]: 2026-03-12 23:47:16.347 [INFO][4810] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" host="ip-172-31-21-65" Mar 12 23:47:16.424814 containerd[2024]: 2026-03-12 23:47:16.351 [INFO][4810] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648 Mar 12 23:47:16.424814 containerd[2024]: 2026-03-12 23:47:16.359 [INFO][4810] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" host="ip-172-31-21-65" Mar 12 23:47:16.424814 containerd[2024]: 2026-03-12 23:47:16.378 [INFO][4810] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 
handle="k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" host="ip-172-31-21-65" Mar 12 23:47:16.424814 containerd[2024]: 2026-03-12 23:47:16.378 [INFO][4810] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" host="ip-172-31-21-65" Mar 12 23:47:16.424814 containerd[2024]: 2026-03-12 23:47:16.379 [INFO][4810] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:16.424814 containerd[2024]: 2026-03-12 23:47:16.379 [INFO][4810] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" HandleID="k8s-pod-network.051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Workload="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" Mar 12 23:47:16.425266 containerd[2024]: 2026-03-12 23:47:16.385 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Namespace="calico-system" Pod="whisker-7f54c4b5c8-dwz7w" WorkloadEndpoint="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0", GenerateName:"whisker-7f54c4b5c8-", Namespace:"calico-system", SelfLink:"", UID:"2030e334-13a7-4a2b-ae77-a3cdb6229a10", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f54c4b5c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"", Pod:"whisker-7f54c4b5c8-dwz7w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib3d0922ad59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:16.425266 containerd[2024]: 2026-03-12 23:47:16.385 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.194/32] ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Namespace="calico-system" Pod="whisker-7f54c4b5c8-dwz7w" WorkloadEndpoint="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" Mar 12 23:47:16.425484 containerd[2024]: 2026-03-12 23:47:16.385 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3d0922ad59 ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Namespace="calico-system" Pod="whisker-7f54c4b5c8-dwz7w" WorkloadEndpoint="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" Mar 12 23:47:16.425484 containerd[2024]: 2026-03-12 23:47:16.395 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Namespace="calico-system" Pod="whisker-7f54c4b5c8-dwz7w" WorkloadEndpoint="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" Mar 12 23:47:16.425622 containerd[2024]: 2026-03-12 23:47:16.396 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" 
Namespace="calico-system" Pod="whisker-7f54c4b5c8-dwz7w" WorkloadEndpoint="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0", GenerateName:"whisker-7f54c4b5c8-", Namespace:"calico-system", SelfLink:"", UID:"2030e334-13a7-4a2b-ae77-a3cdb6229a10", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f54c4b5c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648", Pod:"whisker-7f54c4b5c8-dwz7w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib3d0922ad59", MAC:"3e:9a:b7:2b:86:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:16.425734 containerd[2024]: 2026-03-12 23:47:16.417 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" Namespace="calico-system" Pod="whisker-7f54c4b5c8-dwz7w" WorkloadEndpoint="ip--172--31--21--65-k8s-whisker--7f54c4b5c8--dwz7w-eth0" Mar 12 23:47:16.500921 
containerd[2024]: time="2026-03-12T23:47:16.500446779Z" level=info msg="connecting to shim 051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648" address="unix:///run/containerd/s/70de8ec15e76cc7383d16dc1befa3375c755111451b3a78f03e5552cf2aaa0d0" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:16.600281 systemd[1]: Started cri-containerd-051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648.scope - libcontainer container 051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648. Mar 12 23:47:16.781781 containerd[2024]: time="2026-03-12T23:47:16.781627384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f54c4b5c8-dwz7w,Uid:2030e334-13a7-4a2b-ae77-a3cdb6229a10,Namespace:calico-system,Attempt:0,} returns sandbox id \"051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648\"" Mar 12 23:47:17.072379 systemd-networkd[1867]: calic693ea06a8e: Gained IPv6LL Mar 12 23:47:17.213829 kubelet[3357]: I0312 23:47:17.212790 3357 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeeeac9e-eff5-42ea-903a-01682d16a2c1" path="/var/lib/kubelet/pods/aeeeac9e-eff5-42ea-903a-01682d16a2c1/volumes" Mar 12 23:47:17.288872 containerd[2024]: time="2026-03-12T23:47:17.287660234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:17.290041 containerd[2024]: time="2026-03-12T23:47:17.289993862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 12 23:47:17.292248 containerd[2024]: time="2026-03-12T23:47:17.292138418Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:17.301859 containerd[2024]: time="2026-03-12T23:47:17.301718979Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:17.306575 containerd[2024]: time="2026-03-12T23:47:17.306318531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.599341768s" Mar 12 23:47:17.306575 containerd[2024]: time="2026-03-12T23:47:17.306525735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 12 23:47:17.318638 containerd[2024]: time="2026-03-12T23:47:17.318352023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 23:47:17.326625 containerd[2024]: time="2026-03-12T23:47:17.325535031Z" level=info msg="CreateContainer within sandbox \"8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 23:47:17.371320 containerd[2024]: time="2026-03-12T23:47:17.370117143Z" level=info msg="Container b642f3ec55925fe23311376fb92610b2c94ec164aacebb857933658288c9eab8: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:17.387760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1823983170.mount: Deactivated successfully. 
Mar 12 23:47:17.454594 containerd[2024]: time="2026-03-12T23:47:17.454509795Z" level=info msg="CreateContainer within sandbox \"8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b642f3ec55925fe23311376fb92610b2c94ec164aacebb857933658288c9eab8\"" Mar 12 23:47:17.457059 containerd[2024]: time="2026-03-12T23:47:17.456849279Z" level=info msg="StartContainer for \"b642f3ec55925fe23311376fb92610b2c94ec164aacebb857933658288c9eab8\"" Mar 12 23:47:17.467350 containerd[2024]: time="2026-03-12T23:47:17.467046039Z" level=info msg="connecting to shim b642f3ec55925fe23311376fb92610b2c94ec164aacebb857933658288c9eab8" address="unix:///run/containerd/s/c3f9927be8bf2d6e6d92620f6ec34a0ed46ca0a75b89f6a83236d6e36b891044" protocol=ttrpc version=3 Mar 12 23:47:17.532799 systemd[1]: Started cri-containerd-b642f3ec55925fe23311376fb92610b2c94ec164aacebb857933658288c9eab8.scope - libcontainer container b642f3ec55925fe23311376fb92610b2c94ec164aacebb857933658288c9eab8. Mar 12 23:47:17.734527 containerd[2024]: time="2026-03-12T23:47:17.734347217Z" level=info msg="StartContainer for \"b642f3ec55925fe23311376fb92610b2c94ec164aacebb857933658288c9eab8\" returns successfully" Mar 12 23:47:17.780368 systemd-networkd[1867]: calib3d0922ad59: Gained IPv6LL Mar 12 23:47:17.814944 kubelet[3357]: I0312 23:47:17.812978 3357 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 23:47:18.540317 systemd-networkd[1867]: vxlan.calico: Link UP Mar 12 23:47:18.540334 systemd-networkd[1867]: vxlan.calico: Gained carrier Mar 12 23:47:18.547295 (udev-worker)[5082]: Network interface NamePolicy= disabled on kernel command line. 
Mar 12 23:47:18.890602 containerd[2024]: time="2026-03-12T23:47:18.889292370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:18.891346 containerd[2024]: time="2026-03-12T23:47:18.891300102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 12 23:47:18.893676 containerd[2024]: time="2026-03-12T23:47:18.893611878Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:18.902054 containerd[2024]: time="2026-03-12T23:47:18.901999638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:18.903775 containerd[2024]: time="2026-03-12T23:47:18.903009054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.582940995s" Mar 12 23:47:18.904551 containerd[2024]: time="2026-03-12T23:47:18.904495542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 12 23:47:18.907001 containerd[2024]: time="2026-03-12T23:47:18.906956886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 23:47:18.914928 containerd[2024]: time="2026-03-12T23:47:18.914381875Z" level=info msg="CreateContainer within sandbox 
\"051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 23:47:18.938938 containerd[2024]: time="2026-03-12T23:47:18.938393935Z" level=info msg="Container 031c9b298147e3555da44a4d8c48f8420f75b94cb10067acc5ef5629b655b5df: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:18.969304 containerd[2024]: time="2026-03-12T23:47:18.966357859Z" level=info msg="CreateContainer within sandbox \"051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"031c9b298147e3555da44a4d8c48f8420f75b94cb10067acc5ef5629b655b5df\"" Mar 12 23:47:18.970596 containerd[2024]: time="2026-03-12T23:47:18.970296871Z" level=info msg="StartContainer for \"031c9b298147e3555da44a4d8c48f8420f75b94cb10067acc5ef5629b655b5df\"" Mar 12 23:47:18.971313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount623118924.mount: Deactivated successfully. Mar 12 23:47:18.980821 containerd[2024]: time="2026-03-12T23:47:18.980750983Z" level=info msg="connecting to shim 031c9b298147e3555da44a4d8c48f8420f75b94cb10067acc5ef5629b655b5df" address="unix:///run/containerd/s/70de8ec15e76cc7383d16dc1befa3375c755111451b3a78f03e5552cf2aaa0d0" protocol=ttrpc version=3 Mar 12 23:47:19.032444 systemd[1]: Started cri-containerd-031c9b298147e3555da44a4d8c48f8420f75b94cb10067acc5ef5629b655b5df.scope - libcontainer container 031c9b298147e3555da44a4d8c48f8420f75b94cb10067acc5ef5629b655b5df. 
Mar 12 23:47:19.206349 containerd[2024]: time="2026-03-12T23:47:19.204675352Z" level=info msg="StartContainer for \"031c9b298147e3555da44a4d8c48f8420f75b94cb10067acc5ef5629b655b5df\" returns successfully" Mar 12 23:47:20.335278 systemd-networkd[1867]: vxlan.calico: Gained IPv6LL Mar 12 23:47:20.503190 containerd[2024]: time="2026-03-12T23:47:20.503121090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:20.505961 containerd[2024]: time="2026-03-12T23:47:20.505781466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 12 23:47:20.508405 containerd[2024]: time="2026-03-12T23:47:20.508326558Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:20.512477 containerd[2024]: time="2026-03-12T23:47:20.512393466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:20.514318 containerd[2024]: time="2026-03-12T23:47:20.513794094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.605782192s" Mar 12 23:47:20.514318 containerd[2024]: time="2026-03-12T23:47:20.513852762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference 
\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 12 23:47:20.516826 containerd[2024]: time="2026-03-12T23:47:20.516762486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 23:47:20.523923 containerd[2024]: time="2026-03-12T23:47:20.523840135Z" level=info msg="CreateContainer within sandbox \"8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 23:47:20.543442 containerd[2024]: time="2026-03-12T23:47:20.543375007Z" level=info msg="Container e18f4577b5c5ec1a6daa32615c2155a5b8c7b032757222b10af80df85934cb12: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:20.557680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337162233.mount: Deactivated successfully. Mar 12 23:47:20.570770 containerd[2024]: time="2026-03-12T23:47:20.570694543Z" level=info msg="CreateContainer within sandbox \"8d375f36a7607e21c7741dfd8b718428c40c61b33432bdbc954923f44ac5fb6a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e18f4577b5c5ec1a6daa32615c2155a5b8c7b032757222b10af80df85934cb12\"" Mar 12 23:47:20.572677 containerd[2024]: time="2026-03-12T23:47:20.571861555Z" level=info msg="StartContainer for \"e18f4577b5c5ec1a6daa32615c2155a5b8c7b032757222b10af80df85934cb12\"" Mar 12 23:47:20.575549 containerd[2024]: time="2026-03-12T23:47:20.575455711Z" level=info msg="connecting to shim e18f4577b5c5ec1a6daa32615c2155a5b8c7b032757222b10af80df85934cb12" address="unix:///run/containerd/s/c3f9927be8bf2d6e6d92620f6ec34a0ed46ca0a75b89f6a83236d6e36b891044" protocol=ttrpc version=3 Mar 12 23:47:20.618238 systemd[1]: Started cri-containerd-e18f4577b5c5ec1a6daa32615c2155a5b8c7b032757222b10af80df85934cb12.scope - libcontainer container e18f4577b5c5ec1a6daa32615c2155a5b8c7b032757222b10af80df85934cb12. 
Mar 12 23:47:20.732662 containerd[2024]: time="2026-03-12T23:47:20.732504680Z" level=info msg="StartContainer for \"e18f4577b5c5ec1a6daa32615c2155a5b8c7b032757222b10af80df85934cb12\" returns successfully" Mar 12 23:47:20.818921 kubelet[3357]: I0312 23:47:20.818318 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xgscd" podStartSLOduration=23.008145457 podStartE2EDuration="27.818291672s" podCreationTimestamp="2026-03-12 23:46:53 +0000 UTC" firstStartedPulling="2026-03-12 23:47:15.706107159 +0000 UTC m=+48.837613264" lastFinishedPulling="2026-03-12 23:47:20.51625329 +0000 UTC m=+53.647759479" observedRunningTime="2026-03-12 23:47:20.814577816 +0000 UTC m=+53.946083921" watchObservedRunningTime="2026-03-12 23:47:20.818291672 +0000 UTC m=+53.949797777" Mar 12 23:47:21.399709 kubelet[3357]: I0312 23:47:21.399649 3357 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 23:47:21.399709 kubelet[3357]: I0312 23:47:21.399702 3357 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 23:47:22.218931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675764358.mount: Deactivated successfully. 
Mar 12 23:47:22.247943 containerd[2024]: time="2026-03-12T23:47:22.246953515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:22.248817 containerd[2024]: time="2026-03-12T23:47:22.248678011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 12 23:47:22.252554 containerd[2024]: time="2026-03-12T23:47:22.251366671Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:22.257174 containerd[2024]: time="2026-03-12T23:47:22.257118907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:22.258593 containerd[2024]: time="2026-03-12T23:47:22.258534439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.741707333s" Mar 12 23:47:22.258691 containerd[2024]: time="2026-03-12T23:47:22.258593047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 12 23:47:22.268379 containerd[2024]: time="2026-03-12T23:47:22.268334767Z" level=info msg="CreateContainer within sandbox \"051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 23:47:22.286471 
containerd[2024]: time="2026-03-12T23:47:22.286384651Z" level=info msg="Container 11c7af1e401a964f2c32cd7785a7aa37299c85901f94f2c52a12e4bb73a70baf: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:22.297474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196154457.mount: Deactivated successfully. Mar 12 23:47:22.310867 containerd[2024]: time="2026-03-12T23:47:22.310805623Z" level=info msg="CreateContainer within sandbox \"051f9dcd4db26640627adc6e171ff9d184e67ffb29c2db5acfed85c3866b3648\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"11c7af1e401a964f2c32cd7785a7aa37299c85901f94f2c52a12e4bb73a70baf\"" Mar 12 23:47:22.312438 containerd[2024]: time="2026-03-12T23:47:22.311945311Z" level=info msg="StartContainer for \"11c7af1e401a964f2c32cd7785a7aa37299c85901f94f2c52a12e4bb73a70baf\"" Mar 12 23:47:22.314450 containerd[2024]: time="2026-03-12T23:47:22.314395483Z" level=info msg="connecting to shim 11c7af1e401a964f2c32cd7785a7aa37299c85901f94f2c52a12e4bb73a70baf" address="unix:///run/containerd/s/70de8ec15e76cc7383d16dc1befa3375c755111451b3a78f03e5552cf2aaa0d0" protocol=ttrpc version=3 Mar 12 23:47:22.393216 systemd[1]: Started cri-containerd-11c7af1e401a964f2c32cd7785a7aa37299c85901f94f2c52a12e4bb73a70baf.scope - libcontainer container 11c7af1e401a964f2c32cd7785a7aa37299c85901f94f2c52a12e4bb73a70baf. Mar 12 23:47:22.611317 systemd[1]: Started sshd@7-172.31.21.65:22-4.153.228.146:53990.service - OpenSSH per-connection server daemon (4.153.228.146:53990). 
Mar 12 23:47:22.647256 containerd[2024]: time="2026-03-12T23:47:22.647026965Z" level=info msg="StartContainer for \"11c7af1e401a964f2c32cd7785a7aa37299c85901f94f2c52a12e4bb73a70baf\" returns successfully" Mar 12 23:47:22.749233 ntpd[2239]: Listen normally on 6 vxlan.calico 192.168.99.192:123 Mar 12 23:47:22.749333 ntpd[2239]: Listen normally on 7 calic693ea06a8e [fe80::ecee:eeff:feee:eeee%4]:123 Mar 12 23:47:22.749781 ntpd[2239]: 12 Mar 23:47:22 ntpd[2239]: Listen normally on 6 vxlan.calico 192.168.99.192:123 Mar 12 23:47:22.749781 ntpd[2239]: 12 Mar 23:47:22 ntpd[2239]: Listen normally on 7 calic693ea06a8e [fe80::ecee:eeff:feee:eeee%4]:123 Mar 12 23:47:22.749781 ntpd[2239]: 12 Mar 23:47:22 ntpd[2239]: Listen normally on 8 calib3d0922ad59 [fe80::ecee:eeff:feee:eeee%5]:123 Mar 12 23:47:22.749781 ntpd[2239]: 12 Mar 23:47:22 ntpd[2239]: Listen normally on 9 vxlan.calico [fe80::6451:48ff:fe7c:100d%6]:123 Mar 12 23:47:22.749381 ntpd[2239]: Listen normally on 8 calib3d0922ad59 [fe80::ecee:eeff:feee:eeee%5]:123 Mar 12 23:47:22.749426 ntpd[2239]: Listen normally on 9 vxlan.calico [fe80::6451:48ff:fe7c:100d%6]:123 Mar 12 23:47:23.090317 sshd[5269]: Accepted publickey for core from 4.153.228.146 port 53990 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:47:23.095765 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:47:23.108341 systemd-logind[1990]: New session 8 of user core. Mar 12 23:47:23.112134 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 12 23:47:23.495579 sshd[5281]: Connection closed by 4.153.228.146 port 53990 Mar 12 23:47:23.496488 sshd-session[5269]: pam_unix(sshd:session): session closed for user core Mar 12 23:47:23.505499 systemd[1]: sshd@7-172.31.21.65:22-4.153.228.146:53990.service: Deactivated successfully. Mar 12 23:47:23.510375 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 23:47:23.514595 systemd-logind[1990]: Session 8 logged out. 
Waiting for processes to exit. Mar 12 23:47:23.519019 systemd-logind[1990]: Removed session 8. Mar 12 23:47:25.199625 containerd[2024]: time="2026-03-12T23:47:25.198192766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bf8f5f48c-5x29h,Uid:0aa24a97-cdcd-41d3-827c-7d2de2d0d07d,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:25.201065 containerd[2024]: time="2026-03-12T23:47:25.199799626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c4f6954-j5klg,Uid:9c5c2018-747c-459a-ad4c-0b4fa811d698,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:25.471655 systemd-networkd[1867]: cali5ac557b9f0a: Link UP Mar 12 23:47:25.473851 systemd-networkd[1867]: cali5ac557b9f0a: Gained carrier Mar 12 23:47:25.481436 (udev-worker)[5343]: Network interface NamePolicy= disabled on kernel command line. Mar 12 23:47:25.496353 kubelet[3357]: I0312 23:47:25.496258 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f54c4b5c8-dwz7w" podStartSLOduration=5.023707468 podStartE2EDuration="10.496234307s" podCreationTimestamp="2026-03-12 23:47:15 +0000 UTC" firstStartedPulling="2026-03-12 23:47:16.787925056 +0000 UTC m=+49.919431149" lastFinishedPulling="2026-03-12 23:47:22.260451883 +0000 UTC m=+55.391957988" observedRunningTime="2026-03-12 23:47:22.845950354 +0000 UTC m=+55.977456471" watchObservedRunningTime="2026-03-12 23:47:25.496234307 +0000 UTC m=+58.627740412" Mar 12 23:47:25.512805 containerd[2024]: 2026-03-12 23:47:25.309 [INFO][5306] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0 calico-kube-controllers-bf8f5f48c- calico-system 0aa24a97-cdcd-41d3-827c-7d2de2d0d07d 859 0 2026-03-12 23:46:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bf8f5f48c 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-65 calico-kube-controllers-bf8f5f48c-5x29h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5ac557b9f0a [] [] }} ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Namespace="calico-system" Pod="calico-kube-controllers-bf8f5f48c-5x29h" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-" Mar 12 23:47:25.512805 containerd[2024]: 2026-03-12 23:47:25.310 [INFO][5306] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Namespace="calico-system" Pod="calico-kube-controllers-bf8f5f48c-5x29h" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" Mar 12 23:47:25.512805 containerd[2024]: 2026-03-12 23:47:25.382 [INFO][5328] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" HandleID="k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Workload="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.399 [INFO][5328] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" HandleID="k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Workload="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-65", "pod":"calico-kube-controllers-bf8f5f48c-5x29h", "timestamp":"2026-03-12 23:47:25.382123643 +0000 UTC"}, 
Hostname:"ip-172-31-21-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186840)} Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.400 [INFO][5328] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.400 [INFO][5328] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.400 [INFO][5328] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-65' Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.409 [INFO][5328] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" host="ip-172-31-21-65" Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.419 [INFO][5328] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-65" Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.427 [INFO][5328] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.434 [INFO][5328] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:25.513165 containerd[2024]: 2026-03-12 23:47:25.438 [INFO][5328] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:25.513581 containerd[2024]: 2026-03-12 23:47:25.438 [INFO][5328] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" host="ip-172-31-21-65" Mar 12 23:47:25.513581 containerd[2024]: 2026-03-12 23:47:25.440 [INFO][5328] ipam/ipam.go 1806: 
Creating new handle: k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835 Mar 12 23:47:25.513581 containerd[2024]: 2026-03-12 23:47:25.447 [INFO][5328] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" host="ip-172-31-21-65" Mar 12 23:47:25.513581 containerd[2024]: 2026-03-12 23:47:25.459 [INFO][5328] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" host="ip-172-31-21-65" Mar 12 23:47:25.513581 containerd[2024]: 2026-03-12 23:47:25.459 [INFO][5328] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" host="ip-172-31-21-65" Mar 12 23:47:25.513581 containerd[2024]: 2026-03-12 23:47:25.459 [INFO][5328] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 23:47:25.513581 containerd[2024]: 2026-03-12 23:47:25.459 [INFO][5328] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" HandleID="k8s-pod-network.c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Workload="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" Mar 12 23:47:25.517390 containerd[2024]: 2026-03-12 23:47:25.466 [INFO][5306] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Namespace="calico-system" Pod="calico-kube-controllers-bf8f5f48c-5x29h" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0", GenerateName:"calico-kube-controllers-bf8f5f48c-", Namespace:"calico-system", SelfLink:"", UID:"0aa24a97-cdcd-41d3-827c-7d2de2d0d07d", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bf8f5f48c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"", Pod:"calico-kube-controllers-bf8f5f48c-5x29h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ac557b9f0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:25.518069 containerd[2024]: 2026-03-12 23:47:25.466 [INFO][5306] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.195/32] ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Namespace="calico-system" Pod="calico-kube-controllers-bf8f5f48c-5x29h" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" Mar 12 23:47:25.518069 containerd[2024]: 2026-03-12 23:47:25.466 [INFO][5306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ac557b9f0a ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Namespace="calico-system" Pod="calico-kube-controllers-bf8f5f48c-5x29h" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" Mar 12 23:47:25.518069 containerd[2024]: 2026-03-12 23:47:25.474 [INFO][5306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Namespace="calico-system" Pod="calico-kube-controllers-bf8f5f48c-5x29h" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" Mar 12 23:47:25.519046 containerd[2024]: 2026-03-12 23:47:25.476 [INFO][5306] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Namespace="calico-system" Pod="calico-kube-controllers-bf8f5f48c-5x29h" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0", GenerateName:"calico-kube-controllers-bf8f5f48c-", Namespace:"calico-system", SelfLink:"", UID:"0aa24a97-cdcd-41d3-827c-7d2de2d0d07d", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bf8f5f48c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835", Pod:"calico-kube-controllers-bf8f5f48c-5x29h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ac557b9f0a", MAC:"92:8b:68:96:59:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:25.519215 containerd[2024]: 2026-03-12 23:47:25.495 [INFO][5306] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" Namespace="calico-system" Pod="calico-kube-controllers-bf8f5f48c-5x29h" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--kube--controllers--bf8f5f48c--5x29h-eth0" Mar 12 23:47:25.610155 containerd[2024]: time="2026-03-12T23:47:25.610069308Z" 
level=info msg="connecting to shim c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835" address="unix:///run/containerd/s/e531edbeadcfcf9e8b3c54d4abb7be25ef12f14876efc4bd5245ea87a2f57034" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:25.614384 (udev-worker)[5345]: Network interface NamePolicy= disabled on kernel command line. Mar 12 23:47:25.623663 systemd-networkd[1867]: cali203010e4306: Link UP Mar 12 23:47:25.626470 systemd-networkd[1867]: cali203010e4306: Gained carrier Mar 12 23:47:25.700536 containerd[2024]: 2026-03-12 23:47:25.325 [INFO][5311] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0 calico-apiserver-96c4f6954- calico-system 9c5c2018-747c-459a-ad4c-0b4fa811d698 854 0 2026-03-12 23:46:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96c4f6954 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-65 calico-apiserver-96c4f6954-j5klg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali203010e4306 [] [] }} ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-j5klg" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-" Mar 12 23:47:25.700536 containerd[2024]: 2026-03-12 23:47:25.325 [INFO][5311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-j5klg" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" Mar 12 23:47:25.700536 containerd[2024]: 2026-03-12 23:47:25.392 [INFO][5333] ipam/ipam_plugin.go 235: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" HandleID="k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Workload="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.418 [INFO][5333] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" HandleID="k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Workload="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-65", "pod":"calico-apiserver-96c4f6954-j5klg", "timestamp":"2026-03-12 23:47:25.392368151 +0000 UTC"}, Hostname:"ip-172-31-21-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003bb340)} Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.418 [INFO][5333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.459 [INFO][5333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.460 [INFO][5333] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-65' Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.508 [INFO][5333] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" host="ip-172-31-21-65" Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.522 [INFO][5333] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-65" Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.536 [INFO][5333] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.541 [INFO][5333] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:25.700943 containerd[2024]: 2026-03-12 23:47:25.546 [INFO][5333] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:25.701456 containerd[2024]: 2026-03-12 23:47:25.547 [INFO][5333] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" host="ip-172-31-21-65" Mar 12 23:47:25.701456 containerd[2024]: 2026-03-12 23:47:25.552 [INFO][5333] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51 Mar 12 23:47:25.701456 containerd[2024]: 2026-03-12 23:47:25.562 [INFO][5333] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" host="ip-172-31-21-65" Mar 12 23:47:25.701456 containerd[2024]: 2026-03-12 23:47:25.590 [INFO][5333] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 
handle="k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" host="ip-172-31-21-65" Mar 12 23:47:25.701456 containerd[2024]: 2026-03-12 23:47:25.591 [INFO][5333] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" host="ip-172-31-21-65" Mar 12 23:47:25.701456 containerd[2024]: 2026-03-12 23:47:25.592 [INFO][5333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:25.701456 containerd[2024]: 2026-03-12 23:47:25.592 [INFO][5333] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" HandleID="k8s-pod-network.297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Workload="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" Mar 12 23:47:25.701772 containerd[2024]: 2026-03-12 23:47:25.607 [INFO][5311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-j5klg" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0", GenerateName:"calico-apiserver-96c4f6954-", Namespace:"calico-system", SelfLink:"", UID:"9c5c2018-747c-459a-ad4c-0b4fa811d698", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c4f6954", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"", Pod:"calico-apiserver-96c4f6954-j5klg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali203010e4306", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:25.701945 containerd[2024]: 2026-03-12 23:47:25.608 [INFO][5311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.196/32] ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-j5klg" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" Mar 12 23:47:25.701945 containerd[2024]: 2026-03-12 23:47:25.608 [INFO][5311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali203010e4306 ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-j5klg" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" Mar 12 23:47:25.701945 containerd[2024]: 2026-03-12 23:47:25.630 [INFO][5311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-j5klg" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" Mar 12 23:47:25.702752 containerd[2024]: 2026-03-12 23:47:25.632 [INFO][5311] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-j5klg" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0", GenerateName:"calico-apiserver-96c4f6954-", Namespace:"calico-system", SelfLink:"", UID:"9c5c2018-747c-459a-ad4c-0b4fa811d698", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c4f6954", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51", Pod:"calico-apiserver-96c4f6954-j5klg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali203010e4306", MAC:"da:05:7e:c4:a2:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:25.703147 containerd[2024]: 2026-03-12 23:47:25.676 [INFO][5311] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-j5klg" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--j5klg-eth0" Mar 12 23:47:25.728493 systemd[1]: Started cri-containerd-c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835.scope - libcontainer container c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835. Mar 12 23:47:25.800668 containerd[2024]: time="2026-03-12T23:47:25.800423593Z" level=info msg="connecting to shim 297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51" address="unix:///run/containerd/s/84ec3f9e8cce9634950cf82f1ea83f42296682ab11166b84fcd4dbdb223b5432" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:25.909630 systemd[1]: Started cri-containerd-297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51.scope - libcontainer container 297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51. 
Mar 12 23:47:25.936493 containerd[2024]: time="2026-03-12T23:47:25.936425653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bf8f5f48c-5x29h,Uid:0aa24a97-cdcd-41d3-827c-7d2de2d0d07d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835\"" Mar 12 23:47:25.942296 containerd[2024]: time="2026-03-12T23:47:25.942170065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 23:47:26.040662 containerd[2024]: time="2026-03-12T23:47:26.040562422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c4f6954-j5klg,Uid:9c5c2018-747c-459a-ad4c-0b4fa811d698,Namespace:calico-system,Attempt:0,} returns sandbox id \"297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51\"" Mar 12 23:47:26.197012 containerd[2024]: time="2026-03-12T23:47:26.196872755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c4f6954-g4777,Uid:f1a13fa1-fd17-43ff-a5e3-be119533e7ce,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:26.416803 systemd-networkd[1867]: calibe83d0b6c4d: Link UP Mar 12 23:47:26.418788 systemd-networkd[1867]: calibe83d0b6c4d: Gained carrier Mar 12 23:47:26.450223 containerd[2024]: 2026-03-12 23:47:26.283 [INFO][5474] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0 calico-apiserver-96c4f6954- calico-system f1a13fa1-fd17-43ff-a5e3-be119533e7ce 855 0 2026-03-12 23:46:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96c4f6954 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-65 calico-apiserver-96c4f6954-g4777 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] 
calibe83d0b6c4d [] [] }} ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-g4777" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-" Mar 12 23:47:26.450223 containerd[2024]: 2026-03-12 23:47:26.283 [INFO][5474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-g4777" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" Mar 12 23:47:26.450223 containerd[2024]: 2026-03-12 23:47:26.339 [INFO][5485] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" HandleID="k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Workload="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.353 [INFO][5485] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" HandleID="k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Workload="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9d60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-65", "pod":"calico-apiserver-96c4f6954-g4777", "timestamp":"2026-03-12 23:47:26.339127439 +0000 UTC"}, Hostname:"ip-172-31-21-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000f8420)} Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.353 [INFO][5485] ipam/ipam_plugin.go 
438: About to acquire host-wide IPAM lock. Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.353 [INFO][5485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.354 [INFO][5485] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-65' Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.357 [INFO][5485] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" host="ip-172-31-21-65" Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.365 [INFO][5485] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-65" Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.372 [INFO][5485] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.376 [INFO][5485] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:26.451707 containerd[2024]: 2026-03-12 23:47:26.380 [INFO][5485] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:26.453049 containerd[2024]: 2026-03-12 23:47:26.380 [INFO][5485] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" host="ip-172-31-21-65" Mar 12 23:47:26.453049 containerd[2024]: 2026-03-12 23:47:26.384 [INFO][5485] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579 Mar 12 23:47:26.453049 containerd[2024]: 2026-03-12 23:47:26.391 [INFO][5485] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" 
host="ip-172-31-21-65" Mar 12 23:47:26.453049 containerd[2024]: 2026-03-12 23:47:26.404 [INFO][5485] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.197/26] block=192.168.99.192/26 handle="k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" host="ip-172-31-21-65" Mar 12 23:47:26.453049 containerd[2024]: 2026-03-12 23:47:26.404 [INFO][5485] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.197/26] handle="k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" host="ip-172-31-21-65" Mar 12 23:47:26.453049 containerd[2024]: 2026-03-12 23:47:26.404 [INFO][5485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:26.453049 containerd[2024]: 2026-03-12 23:47:26.404 [INFO][5485] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.197/26] IPv6=[] ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" HandleID="k8s-pod-network.199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Workload="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" Mar 12 23:47:26.453369 containerd[2024]: 2026-03-12 23:47:26.409 [INFO][5474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-g4777" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0", GenerateName:"calico-apiserver-96c4f6954-", Namespace:"calico-system", SelfLink:"", UID:"f1a13fa1-fd17-43ff-a5e3-be119533e7ce", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c4f6954", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"", Pod:"calico-apiserver-96c4f6954-g4777", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibe83d0b6c4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:26.453539 containerd[2024]: 2026-03-12 23:47:26.409 [INFO][5474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.197/32] ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-g4777" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" Mar 12 23:47:26.453539 containerd[2024]: 2026-03-12 23:47:26.409 [INFO][5474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe83d0b6c4d ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-g4777" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" Mar 12 23:47:26.453539 containerd[2024]: 2026-03-12 23:47:26.415 [INFO][5474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Namespace="calico-system" 
Pod="calico-apiserver-96c4f6954-g4777" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" Mar 12 23:47:26.453689 containerd[2024]: 2026-03-12 23:47:26.416 [INFO][5474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-g4777" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0", GenerateName:"calico-apiserver-96c4f6954-", Namespace:"calico-system", SelfLink:"", UID:"f1a13fa1-fd17-43ff-a5e3-be119533e7ce", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c4f6954", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579", Pod:"calico-apiserver-96c4f6954-g4777", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibe83d0b6c4d", MAC:"fa:33:0f:90:78:0e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:26.453803 containerd[2024]: 2026-03-12 23:47:26.437 [INFO][5474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" Namespace="calico-system" Pod="calico-apiserver-96c4f6954-g4777" WorkloadEndpoint="ip--172--31--21--65-k8s-calico--apiserver--96c4f6954--g4777-eth0" Mar 12 23:47:26.516363 containerd[2024]: time="2026-03-12T23:47:26.516132720Z" level=info msg="connecting to shim 199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579" address="unix:///run/containerd/s/a9ea1ca37fe406a10ca1fa24e72f288772cef73ccb5998c4d27d4347b0a09c48" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:26.584497 systemd[1]: Started cri-containerd-199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579.scope - libcontainer container 199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579. Mar 12 23:47:26.692273 containerd[2024]: time="2026-03-12T23:47:26.691691797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c4f6954-g4777,Uid:f1a13fa1-fd17-43ff-a5e3-be119533e7ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579\"" Mar 12 23:47:26.991183 systemd-networkd[1867]: cali203010e4306: Gained IPv6LL Mar 12 23:47:27.055235 systemd-networkd[1867]: cali5ac557b9f0a: Gained IPv6LL Mar 12 23:47:28.198068 containerd[2024]: time="2026-03-12T23:47:28.197843845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jthnw,Uid:124c3195-be33-46e0-a14d-d1edb9412c07,Namespace:kube-system,Attempt:0,}" Mar 12 23:47:28.399664 systemd-networkd[1867]: calibe83d0b6c4d: Gained IPv6LL Mar 12 23:47:28.600158 systemd[1]: Started sshd@8-172.31.21.65:22-4.153.228.146:53998.service - OpenSSH per-connection server daemon (4.153.228.146:53998). 
Mar 12 23:47:28.629645 systemd-networkd[1867]: cali9e5932aa6bf: Link UP Mar 12 23:47:28.630012 systemd-networkd[1867]: cali9e5932aa6bf: Gained carrier Mar 12 23:47:28.712551 containerd[2024]: 2026-03-12 23:47:28.329 [INFO][5576] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0 coredns-674b8bbfcf- kube-system 124c3195-be33-46e0-a14d-d1edb9412c07 861 0 2026-03-12 23:46:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-65 coredns-674b8bbfcf-jthnw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e5932aa6bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jthnw" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-" Mar 12 23:47:28.712551 containerd[2024]: 2026-03-12 23:47:28.330 [INFO][5576] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jthnw" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" Mar 12 23:47:28.712551 containerd[2024]: 2026-03-12 23:47:28.436 [INFO][5587] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" HandleID="k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Workload="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.476 [INFO][5587] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" HandleID="k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Workload="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cc80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-65", "pod":"coredns-674b8bbfcf-jthnw", "timestamp":"2026-03-12 23:47:28.436399406 +0000 UTC"}, Hostname:"ip-172-31-21-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000400000)} Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.477 [INFO][5587] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.477 [INFO][5587] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.477 [INFO][5587] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-65' Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.483 [INFO][5587] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" host="ip-172-31-21-65" Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.500 [INFO][5587] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-65" Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.530 [INFO][5587] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.539 [INFO][5587] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:28.713381 containerd[2024]: 2026-03-12 23:47:28.557 [INFO][5587] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:28.715070 containerd[2024]: 2026-03-12 23:47:28.559 [INFO][5587] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" host="ip-172-31-21-65" Mar 12 23:47:28.715070 containerd[2024]: 2026-03-12 23:47:28.564 [INFO][5587] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e Mar 12 23:47:28.715070 containerd[2024]: 2026-03-12 23:47:28.578 [INFO][5587] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" host="ip-172-31-21-65" Mar 12 23:47:28.715070 containerd[2024]: 2026-03-12 23:47:28.608 [INFO][5587] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.198/26] block=192.168.99.192/26 
handle="k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" host="ip-172-31-21-65" Mar 12 23:47:28.715070 containerd[2024]: 2026-03-12 23:47:28.608 [INFO][5587] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.198/26] handle="k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" host="ip-172-31-21-65" Mar 12 23:47:28.715070 containerd[2024]: 2026-03-12 23:47:28.608 [INFO][5587] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 23:47:28.715070 containerd[2024]: 2026-03-12 23:47:28.608 [INFO][5587] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.198/26] IPv6=[] ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" HandleID="k8s-pod-network.35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Workload="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" Mar 12 23:47:28.715411 containerd[2024]: 2026-03-12 23:47:28.620 [INFO][5576] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jthnw" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"124c3195-be33-46e0-a14d-d1edb9412c07", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"", Pod:"coredns-674b8bbfcf-jthnw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e5932aa6bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:28.715411 containerd[2024]: 2026-03-12 23:47:28.621 [INFO][5576] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.198/32] ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jthnw" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" Mar 12 23:47:28.715411 containerd[2024]: 2026-03-12 23:47:28.622 [INFO][5576] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e5932aa6bf ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jthnw" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" Mar 12 23:47:28.715411 containerd[2024]: 2026-03-12 23:47:28.628 [INFO][5576] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-jthnw" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" Mar 12 23:47:28.715411 containerd[2024]: 2026-03-12 23:47:28.629 [INFO][5576] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jthnw" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"124c3195-be33-46e0-a14d-d1edb9412c07", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e", Pod:"coredns-674b8bbfcf-jthnw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e5932aa6bf", MAC:"6a:cc:c1:6c:94:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:28.715411 containerd[2024]: 2026-03-12 23:47:28.671 [INFO][5576] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jthnw" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--jthnw-eth0" Mar 12 23:47:28.787828 containerd[2024]: time="2026-03-12T23:47:28.787644256Z" level=info msg="connecting to shim 35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e" address="unix:///run/containerd/s/36d53d824e56c136d236e91638dd696d32efad2fd6ef073ab1f4a2587c845025" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:28.894749 systemd[1]: Started cri-containerd-35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e.scope - libcontainer container 35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e. 
Mar 12 23:47:29.092928 containerd[2024]: time="2026-03-12T23:47:29.091428529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jthnw,Uid:124c3195-be33-46e0-a14d-d1edb9412c07,Namespace:kube-system,Attempt:0,} returns sandbox id \"35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e\"" Mar 12 23:47:29.112822 containerd[2024]: time="2026-03-12T23:47:29.112455721Z" level=info msg="CreateContainer within sandbox \"35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 23:47:29.149709 containerd[2024]: time="2026-03-12T23:47:29.149209837Z" level=info msg="Container 7569e265d82bfa4c9a0391340ad3ad7d69a162c0ca600dbac9b27394314cd99e: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:29.160361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969343611.mount: Deactivated successfully. Mar 12 23:47:29.169826 containerd[2024]: time="2026-03-12T23:47:29.169759621Z" level=info msg="CreateContainer within sandbox \"35317a65261fc967410c68f18744cd0b50f061e21d3f7dd313faa5655eeff41e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7569e265d82bfa4c9a0391340ad3ad7d69a162c0ca600dbac9b27394314cd99e\"" Mar 12 23:47:29.173341 containerd[2024]: time="2026-03-12T23:47:29.171983497Z" level=info msg="StartContainer for \"7569e265d82bfa4c9a0391340ad3ad7d69a162c0ca600dbac9b27394314cd99e\"" Mar 12 23:47:29.176355 containerd[2024]: time="2026-03-12T23:47:29.176301937Z" level=info msg="connecting to shim 7569e265d82bfa4c9a0391340ad3ad7d69a162c0ca600dbac9b27394314cd99e" address="unix:///run/containerd/s/36d53d824e56c136d236e91638dd696d32efad2fd6ef073ab1f4a2587c845025" protocol=ttrpc version=3 Mar 12 23:47:29.194291 sshd[5598]: Accepted publickey for core from 4.153.228.146 port 53998 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc Mar 12 23:47:29.200919 containerd[2024]: time="2026-03-12T23:47:29.200004230Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-kmm7z,Uid:7dd1b291-6c99-4d9b-b57e-b54a5e164f4c,Namespace:calico-system,Attempt:0,}" Mar 12 23:47:29.201931 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:47:29.202961 containerd[2024]: time="2026-03-12T23:47:29.201564902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhx57,Uid:f2877bb0-6b5f-460a-8752-03e954e499f9,Namespace:kube-system,Attempt:0,}" Mar 12 23:47:29.237982 systemd-logind[1990]: New session 9 of user core. Mar 12 23:47:29.254489 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 23:47:29.333488 systemd[1]: Started cri-containerd-7569e265d82bfa4c9a0391340ad3ad7d69a162c0ca600dbac9b27394314cd99e.scope - libcontainer container 7569e265d82bfa4c9a0391340ad3ad7d69a162c0ca600dbac9b27394314cd99e. Mar 12 23:47:29.645795 containerd[2024]: time="2026-03-12T23:47:29.644614624Z" level=info msg="StartContainer for \"7569e265d82bfa4c9a0391340ad3ad7d69a162c0ca600dbac9b27394314cd99e\" returns successfully" Mar 12 23:47:29.681185 containerd[2024]: time="2026-03-12T23:47:29.681129436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:29.682257 containerd[2024]: time="2026-03-12T23:47:29.681962308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 12 23:47:29.696593 containerd[2024]: time="2026-03-12T23:47:29.696513916Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:29.719223 containerd[2024]: time="2026-03-12T23:47:29.719146960Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:29.740174 containerd[2024]: time="2026-03-12T23:47:29.739835056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.796689871s" Mar 12 23:47:29.740174 containerd[2024]: time="2026-03-12T23:47:29.739919944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 12 23:47:29.745819 containerd[2024]: time="2026-03-12T23:47:29.745288300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 23:47:29.772466 sshd[5693]: Connection closed by 4.153.228.146 port 53998 Mar 12 23:47:29.775196 sshd-session[5598]: pam_unix(sshd:session): session closed for user core Mar 12 23:47:29.802604 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 23:47:29.804095 systemd[1]: sshd@8-172.31.21.65:22-4.153.228.146:53998.service: Deactivated successfully. Mar 12 23:47:29.807613 containerd[2024]: time="2026-03-12T23:47:29.807430229Z" level=info msg="CreateContainer within sandbox \"c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 23:47:29.818480 systemd-logind[1990]: Session 9 logged out. Waiting for processes to exit. Mar 12 23:47:29.829522 systemd-logind[1990]: Removed session 9. 
Mar 12 23:47:29.845953 containerd[2024]: time="2026-03-12T23:47:29.845794781Z" level=info msg="Container 7a561c160d4c9cde59d9f5a02071243d1efe4f591647163a040fcd536a7273f9: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:29.910993 containerd[2024]: time="2026-03-12T23:47:29.909165173Z" level=info msg="CreateContainer within sandbox \"c09fb8d4feee48833c94daa1d75471cccd2717a810714878ec2b0a04a8abd835\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7a561c160d4c9cde59d9f5a02071243d1efe4f591647163a040fcd536a7273f9\"" Mar 12 23:47:29.917427 containerd[2024]: time="2026-03-12T23:47:29.917189009Z" level=info msg="StartContainer for \"7a561c160d4c9cde59d9f5a02071243d1efe4f591647163a040fcd536a7273f9\"" Mar 12 23:47:29.922034 containerd[2024]: time="2026-03-12T23:47:29.921938573Z" level=info msg="connecting to shim 7a561c160d4c9cde59d9f5a02071243d1efe4f591647163a040fcd536a7273f9" address="unix:///run/containerd/s/e531edbeadcfcf9e8b3c54d4abb7be25ef12f14876efc4bd5245ea87a2f57034" protocol=ttrpc version=3 Mar 12 23:47:29.961920 kubelet[3357]: I0312 23:47:29.961332 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jthnw" podStartSLOduration=56.958743317 podStartE2EDuration="56.958743317s" podCreationTimestamp="2026-03-12 23:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:47:29.957978389 +0000 UTC m=+63.089484518" watchObservedRunningTime="2026-03-12 23:47:29.958743317 +0000 UTC m=+63.090249410" Mar 12 23:47:30.042220 systemd[1]: Started cri-containerd-7a561c160d4c9cde59d9f5a02071243d1efe4f591647163a040fcd536a7273f9.scope - libcontainer container 7a561c160d4c9cde59d9f5a02071243d1efe4f591647163a040fcd536a7273f9. 
Mar 12 23:47:30.143230 systemd-networkd[1867]: cali015a43f1b05: Link UP Mar 12 23:47:30.145729 systemd-networkd[1867]: cali015a43f1b05: Gained carrier Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:29.585 [INFO][5683] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0 goldmane-5b85766d88- calico-system 7dd1b291-6c99-4d9b-b57e-b54a5e164f4c 862 0 2026-03-12 23:46:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-21-65 goldmane-5b85766d88-kmm7z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali015a43f1b05 [] [] }} ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Namespace="calico-system" Pod="goldmane-5b85766d88-kmm7z" WorkloadEndpoint="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:29.587 [INFO][5683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Namespace="calico-system" Pod="goldmane-5b85766d88-kmm7z" WorkloadEndpoint="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:29.883 [INFO][5744] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" HandleID="k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Workload="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:29.987 [INFO][5744] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" 
HandleID="k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Workload="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-65", "pod":"goldmane-5b85766d88-kmm7z", "timestamp":"2026-03-12 23:47:29.883442513 +0000 UTC"}, Hostname:"ip-172-31-21-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000256b00)} Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:29.987 [INFO][5744] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:29.987 [INFO][5744] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:29.987 [INFO][5744] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-65' Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.026 [INFO][5744] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.059 [INFO][5744] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.072 [INFO][5744] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.081 [INFO][5744] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.086 [INFO][5744] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.087 [INFO][5744] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.089 [INFO][5744] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.103 [INFO][5744] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.119 [INFO][5744] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.199/26] block=192.168.99.192/26 handle="k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.119 [INFO][5744] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.199/26] handle="k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" host="ip-172-31-21-65" Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.119 [INFO][5744] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 23:47:30.193958 containerd[2024]: 2026-03-12 23:47:30.119 [INFO][5744] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.199/26] IPv6=[] ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" HandleID="k8s-pod-network.b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Workload="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" Mar 12 23:47:30.197432 containerd[2024]: 2026-03-12 23:47:30.134 [INFO][5683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Namespace="calico-system" Pod="goldmane-5b85766d88-kmm7z" WorkloadEndpoint="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7dd1b291-6c99-4d9b-b57e-b54a5e164f4c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"", Pod:"goldmane-5b85766d88-kmm7z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali015a43f1b05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:30.197432 containerd[2024]: 2026-03-12 23:47:30.135 [INFO][5683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.199/32] ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Namespace="calico-system" Pod="goldmane-5b85766d88-kmm7z" WorkloadEndpoint="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" Mar 12 23:47:30.197432 containerd[2024]: 2026-03-12 23:47:30.135 [INFO][5683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali015a43f1b05 ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Namespace="calico-system" Pod="goldmane-5b85766d88-kmm7z" WorkloadEndpoint="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" Mar 12 23:47:30.197432 containerd[2024]: 2026-03-12 23:47:30.147 [INFO][5683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Namespace="calico-system" Pod="goldmane-5b85766d88-kmm7z" WorkloadEndpoint="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" Mar 12 23:47:30.197432 containerd[2024]: 2026-03-12 23:47:30.150 [INFO][5683] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Namespace="calico-system" Pod="goldmane-5b85766d88-kmm7z" WorkloadEndpoint="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7dd1b291-6c99-4d9b-b57e-b54a5e164f4c", ResourceVersion:"862", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b", Pod:"goldmane-5b85766d88-kmm7z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali015a43f1b05", MAC:"a6:ab:1d:86:53:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:30.197432 containerd[2024]: 2026-03-12 23:47:30.181 [INFO][5683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" Namespace="calico-system" Pod="goldmane-5b85766d88-kmm7z" WorkloadEndpoint="ip--172--31--21--65-k8s-goldmane--5b85766d88--kmm7z-eth0" Mar 12 23:47:30.297256 systemd-networkd[1867]: calif0fcc53896a: Link UP Mar 12 23:47:30.297680 systemd-networkd[1867]: calif0fcc53896a: Gained carrier Mar 12 23:47:30.309689 containerd[2024]: time="2026-03-12T23:47:30.309604443Z" level=info msg="connecting to shim b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b" address="unix:///run/containerd/s/efafeec5f901ea99b1ab864b4cceaf87c0bd0e62f8fd23e2aa1a927d8cb1fb24" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:30.319213 
systemd-networkd[1867]: cali9e5932aa6bf: Gained IPv6LL Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:29.568 [INFO][5679] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0 coredns-674b8bbfcf- kube-system f2877bb0-6b5f-460a-8752-03e954e499f9 857 0 2026-03-12 23:46:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-65 coredns-674b8bbfcf-qhx57 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0fcc53896a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhx57" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:29.569 [INFO][5679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhx57" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:29.940 [INFO][5740] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" HandleID="k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Workload="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.025 [INFO][5740] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" HandleID="k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" 
Workload="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e9f70), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-65", "pod":"coredns-674b8bbfcf-qhx57", "timestamp":"2026-03-12 23:47:29.940727561 +0000 UTC"}, Hostname:"ip-172-31-21-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000459080)} Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.025 [INFO][5740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.119 [INFO][5740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.119 [INFO][5740] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-65' Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.125 [INFO][5740] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.159 [INFO][5740] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.180 [INFO][5740] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.190 [INFO][5740] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.200 [INFO][5740] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 
23:47:30.201 [INFO][5740] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.204 [INFO][5740] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.229 [INFO][5740] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.264 [INFO][5740] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.200/26] block=192.168.99.192/26 handle="k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.266 [INFO][5740] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.200/26] handle="k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" host="ip-172-31-21-65" Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.266 [INFO][5740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 23:47:30.342860 containerd[2024]: 2026-03-12 23:47:30.266 [INFO][5740] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.200/26] IPv6=[] ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" HandleID="k8s-pod-network.7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Workload="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" Mar 12 23:47:30.345153 containerd[2024]: 2026-03-12 23:47:30.288 [INFO][5679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhx57" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f2877bb0-6b5f-460a-8752-03e954e499f9", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"", Pod:"coredns-674b8bbfcf-qhx57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0fcc53896a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:30.345153 containerd[2024]: 2026-03-12 23:47:30.289 [INFO][5679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.200/32] ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhx57" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" Mar 12 23:47:30.345153 containerd[2024]: 2026-03-12 23:47:30.289 [INFO][5679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0fcc53896a ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhx57" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" Mar 12 23:47:30.345153 containerd[2024]: 2026-03-12 23:47:30.295 [INFO][5679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhx57" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" Mar 12 23:47:30.345153 containerd[2024]: 2026-03-12 23:47:30.295 [INFO][5679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhx57" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f2877bb0-6b5f-460a-8752-03e954e499f9", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 23, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-65", ContainerID:"7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c", Pod:"coredns-674b8bbfcf-qhx57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0fcc53896a", MAC:"72:d2:e0:a6:ef:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 23:47:30.345153 containerd[2024]: 2026-03-12 23:47:30.323 [INFO][5679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhx57" WorkloadEndpoint="ip--172--31--21--65-k8s-coredns--674b8bbfcf--qhx57-eth0" Mar 12 23:47:30.419762 containerd[2024]: time="2026-03-12T23:47:30.418487176Z" level=info msg="StartContainer for \"7a561c160d4c9cde59d9f5a02071243d1efe4f591647163a040fcd536a7273f9\" returns successfully" Mar 12 23:47:30.480134 systemd[1]: Started cri-containerd-b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b.scope - libcontainer container b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b. Mar 12 23:47:30.488279 containerd[2024]: time="2026-03-12T23:47:30.488174980Z" level=info msg="connecting to shim 7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c" address="unix:///run/containerd/s/cc19a66c42ede5a6dfb154186e15646e5ef408732168989fc37263ef4e062858" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:47:30.609486 systemd[1]: Started cri-containerd-7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c.scope - libcontainer container 7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c. 
Mar 12 23:47:30.724340 containerd[2024]: time="2026-03-12T23:47:30.723673961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-kmm7z,Uid:7dd1b291-6c99-4d9b-b57e-b54a5e164f4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b\"" Mar 12 23:47:30.746291 containerd[2024]: time="2026-03-12T23:47:30.746134433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhx57,Uid:f2877bb0-6b5f-460a-8752-03e954e499f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c\"" Mar 12 23:47:30.762427 containerd[2024]: time="2026-03-12T23:47:30.762201293Z" level=info msg="CreateContainer within sandbox \"7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 23:47:30.795782 containerd[2024]: time="2026-03-12T23:47:30.795708642Z" level=info msg="Container cc814a483b82375057e7b53f964210eb6455a35c81edc7d81169dd5c9b46784a: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:47:30.813613 containerd[2024]: time="2026-03-12T23:47:30.813527406Z" level=info msg="CreateContainer within sandbox \"7e5c3b7470791e8f40cdf68341d037abef246f39f7b21a88756231fa8dafa20c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc814a483b82375057e7b53f964210eb6455a35c81edc7d81169dd5c9b46784a\"" Mar 12 23:47:30.821426 containerd[2024]: time="2026-03-12T23:47:30.820094442Z" level=info msg="StartContainer for \"cc814a483b82375057e7b53f964210eb6455a35c81edc7d81169dd5c9b46784a\"" Mar 12 23:47:30.822530 containerd[2024]: time="2026-03-12T23:47:30.822302538Z" level=info msg="connecting to shim cc814a483b82375057e7b53f964210eb6455a35c81edc7d81169dd5c9b46784a" address="unix:///run/containerd/s/cc19a66c42ede5a6dfb154186e15646e5ef408732168989fc37263ef4e062858" protocol=ttrpc version=3 Mar 12 23:47:30.882227 systemd[1]: Started 
cri-containerd-cc814a483b82375057e7b53f964210eb6455a35c81edc7d81169dd5c9b46784a.scope - libcontainer container cc814a483b82375057e7b53f964210eb6455a35c81edc7d81169dd5c9b46784a. Mar 12 23:47:31.027186 kubelet[3357]: I0312 23:47:31.025017 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bf8f5f48c-5x29h" podStartSLOduration=34.223695552 podStartE2EDuration="38.024993891s" podCreationTimestamp="2026-03-12 23:46:53 +0000 UTC" firstStartedPulling="2026-03-12 23:47:25.941197021 +0000 UTC m=+59.072703126" lastFinishedPulling="2026-03-12 23:47:29.74249536 +0000 UTC m=+62.874001465" observedRunningTime="2026-03-12 23:47:30.95652993 +0000 UTC m=+64.088036047" watchObservedRunningTime="2026-03-12 23:47:31.024993891 +0000 UTC m=+64.156499984" Mar 12 23:47:31.126366 containerd[2024]: time="2026-03-12T23:47:31.126099567Z" level=info msg="StartContainer for \"cc814a483b82375057e7b53f964210eb6455a35c81edc7d81169dd5c9b46784a\" returns successfully" Mar 12 23:47:31.535103 systemd-networkd[1867]: calif0fcc53896a: Gained IPv6LL Mar 12 23:47:31.984531 systemd-networkd[1867]: cali015a43f1b05: Gained IPv6LL Mar 12 23:47:32.026236 kubelet[3357]: I0312 23:47:32.026115 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qhx57" podStartSLOduration=59.02608732 podStartE2EDuration="59.02608732s" podCreationTimestamp="2026-03-12 23:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:47:31.979143115 +0000 UTC m=+65.110649388" watchObservedRunningTime="2026-03-12 23:47:32.02608732 +0000 UTC m=+65.157593425" Mar 12 23:47:32.891729 containerd[2024]: time="2026-03-12T23:47:32.891123416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:32.893792 containerd[2024]: 
time="2026-03-12T23:47:32.893668760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 12 23:47:32.896640 containerd[2024]: time="2026-03-12T23:47:32.896511632Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:32.903344 containerd[2024]: time="2026-03-12T23:47:32.902979224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 23:47:32.907988 containerd[2024]: time="2026-03-12T23:47:32.907560716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.15946882s" Mar 12 23:47:32.907988 containerd[2024]: time="2026-03-12T23:47:32.907622936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 12 23:47:32.914187 containerd[2024]: time="2026-03-12T23:47:32.913978868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 23:47:32.923703 containerd[2024]: time="2026-03-12T23:47:32.923512736Z" level=info msg="CreateContainer within sandbox \"297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 23:47:32.946035 containerd[2024]: time="2026-03-12T23:47:32.943775360Z" level=info msg="Container 1c0c37d359882fb163927b52b22b95d5666c7aedfb24cb1a54e99e9e81812a2c: CDI devices 
from CRI Config.CDIDevices: []" Mar 12 23:47:32.974413 containerd[2024]: time="2026-03-12T23:47:32.974331968Z" level=info msg="CreateContainer within sandbox \"297ce519013119653d648c32621a08422cf8dd0da283f9377c52ee675ab80a51\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c0c37d359882fb163927b52b22b95d5666c7aedfb24cb1a54e99e9e81812a2c\"" Mar 12 23:47:32.976965 containerd[2024]: time="2026-03-12T23:47:32.975389732Z" level=info msg="StartContainer for \"1c0c37d359882fb163927b52b22b95d5666c7aedfb24cb1a54e99e9e81812a2c\"" Mar 12 23:47:32.978949 containerd[2024]: time="2026-03-12T23:47:32.978825656Z" level=info msg="connecting to shim 1c0c37d359882fb163927b52b22b95d5666c7aedfb24cb1a54e99e9e81812a2c" address="unix:///run/containerd/s/84ec3f9e8cce9634950cf82f1ea83f42296682ab11166b84fcd4dbdb223b5432" protocol=ttrpc version=3 Mar 12 23:47:33.029227 systemd[1]: Started cri-containerd-1c0c37d359882fb163927b52b22b95d5666c7aedfb24cb1a54e99e9e81812a2c.scope - libcontainer container 1c0c37d359882fb163927b52b22b95d5666c7aedfb24cb1a54e99e9e81812a2c. 
Mar 12 23:47:33.120859 containerd[2024]: time="2026-03-12T23:47:33.120714353Z" level=info msg="StartContainer for \"1c0c37d359882fb163927b52b22b95d5666c7aedfb24cb1a54e99e9e81812a2c\" returns successfully"
Mar 12 23:47:33.283128 containerd[2024]: time="2026-03-12T23:47:33.282112638Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:47:33.286557 containerd[2024]: time="2026-03-12T23:47:33.286507314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 12 23:47:33.290070 containerd[2024]: time="2026-03-12T23:47:33.290003682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 375.831962ms"
Mar 12 23:47:33.290240 containerd[2024]: time="2026-03-12T23:47:33.290212506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Mar 12 23:47:33.293771 containerd[2024]: time="2026-03-12T23:47:33.293557806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 12 23:47:33.304936 containerd[2024]: time="2026-03-12T23:47:33.304732374Z" level=info msg="CreateContainer within sandbox \"199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 12 23:47:33.323505 containerd[2024]: time="2026-03-12T23:47:33.323353158Z" level=info msg="Container 4a6ca5ca108f62085d77f4e31a326fc4b5d3865b31353c0bd05879f9233aa0c1: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:47:33.349293 containerd[2024]: time="2026-03-12T23:47:33.349217178Z" level=info msg="CreateContainer within sandbox \"199490cb099b167051cb4cfd6d1f860a4d3f3fee5c68c5703829be45f49e9579\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a6ca5ca108f62085d77f4e31a326fc4b5d3865b31353c0bd05879f9233aa0c1\""
Mar 12 23:47:33.352143 containerd[2024]: time="2026-03-12T23:47:33.352098210Z" level=info msg="StartContainer for \"4a6ca5ca108f62085d77f4e31a326fc4b5d3865b31353c0bd05879f9233aa0c1\""
Mar 12 23:47:33.354878 containerd[2024]: time="2026-03-12T23:47:33.354827970Z" level=info msg="connecting to shim 4a6ca5ca108f62085d77f4e31a326fc4b5d3865b31353c0bd05879f9233aa0c1" address="unix:///run/containerd/s/a9ea1ca37fe406a10ca1fa24e72f288772cef73ccb5998c4d27d4347b0a09c48" protocol=ttrpc version=3
Mar 12 23:47:33.399399 systemd[1]: Started cri-containerd-4a6ca5ca108f62085d77f4e31a326fc4b5d3865b31353c0bd05879f9233aa0c1.scope - libcontainer container 4a6ca5ca108f62085d77f4e31a326fc4b5d3865b31353c0bd05879f9233aa0c1.
Mar 12 23:47:33.492810 containerd[2024]: time="2026-03-12T23:47:33.491050111Z" level=info msg="StartContainer for \"4a6ca5ca108f62085d77f4e31a326fc4b5d3865b31353c0bd05879f9233aa0c1\" returns successfully"
Mar 12 23:47:34.024382 kubelet[3357]: I0312 23:47:34.024279 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-96c4f6954-j5klg" podStartSLOduration=39.155894256 podStartE2EDuration="46.024254322s" podCreationTimestamp="2026-03-12 23:46:48 +0000 UTC" firstStartedPulling="2026-03-12 23:47:26.04320355 +0000 UTC m=+59.174709655" lastFinishedPulling="2026-03-12 23:47:32.911563616 +0000 UTC m=+66.043069721" observedRunningTime="2026-03-12 23:47:33.993737433 +0000 UTC m=+67.125243562" watchObservedRunningTime="2026-03-12 23:47:34.024254322 +0000 UTC m=+67.155760427"
Mar 12 23:47:34.749366 ntpd[2239]: Listen normally on 10 cali5ac557b9f0a [fe80::ecee:eeff:feee:eeee%9]:123
Mar 12 23:47:34.749451 ntpd[2239]: Listen normally on 11 cali203010e4306 [fe80::ecee:eeff:feee:eeee%10]:123
Mar 12 23:47:34.751177 ntpd[2239]: 12 Mar 23:47:34 ntpd[2239]: Listen normally on 10 cali5ac557b9f0a [fe80::ecee:eeff:feee:eeee%9]:123
Mar 12 23:47:34.751177 ntpd[2239]: 12 Mar 23:47:34 ntpd[2239]: Listen normally on 11 cali203010e4306 [fe80::ecee:eeff:feee:eeee%10]:123
Mar 12 23:47:34.751177 ntpd[2239]: 12 Mar 23:47:34 ntpd[2239]: Listen normally on 12 calibe83d0b6c4d [fe80::ecee:eeff:feee:eeee%11]:123
Mar 12 23:47:34.751177 ntpd[2239]: 12 Mar 23:47:34 ntpd[2239]: Listen normally on 13 cali9e5932aa6bf [fe80::ecee:eeff:feee:eeee%12]:123
Mar 12 23:47:34.751177 ntpd[2239]: 12 Mar 23:47:34 ntpd[2239]: Listen normally on 14 cali015a43f1b05 [fe80::ecee:eeff:feee:eeee%13]:123
Mar 12 23:47:34.751177 ntpd[2239]: 12 Mar 23:47:34 ntpd[2239]: Listen normally on 15 calif0fcc53896a [fe80::ecee:eeff:feee:eeee%14]:123
Mar 12 23:47:34.749498 ntpd[2239]: Listen normally on 12 calibe83d0b6c4d [fe80::ecee:eeff:feee:eeee%11]:123
Mar 12 23:47:34.749543 ntpd[2239]: Listen normally on 13 cali9e5932aa6bf [fe80::ecee:eeff:feee:eeee%12]:123
Mar 12 23:47:34.749597 ntpd[2239]: Listen normally on 14 cali015a43f1b05 [fe80::ecee:eeff:feee:eeee%13]:123
Mar 12 23:47:34.749641 ntpd[2239]: Listen normally on 15 calif0fcc53896a [fe80::ecee:eeff:feee:eeee%14]:123
Mar 12 23:47:34.871587 systemd[1]: Started sshd@9-172.31.21.65:22-4.153.228.146:40054.service - OpenSSH per-connection server daemon (4.153.228.146:40054).
Mar 12 23:47:34.998418 kubelet[3357]: I0312 23:47:34.997999 3357 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 23:47:34.998418 kubelet[3357]: I0312 23:47:34.998250 3357 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 23:47:35.392613 sshd[6100]: Accepted publickey for core from 4.153.228.146 port 40054 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:35.397468 sshd-session[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:35.415240 systemd-logind[1990]: New session 10 of user core.
Mar 12 23:47:35.417297 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 23:47:35.991696 sshd[6104]: Connection closed by 4.153.228.146 port 40054
Mar 12 23:47:35.991865 sshd-session[6100]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:36.006792 kubelet[3357]: I0312 23:47:36.004638 3357 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 23:47:36.012851 systemd[1]: sshd@9-172.31.21.65:22-4.153.228.146:40054.service: Deactivated successfully.
Mar 12 23:47:36.026167 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 23:47:36.033541 systemd-logind[1990]: Session 10 logged out. Waiting for processes to exit.
Mar 12 23:47:36.038515 systemd-logind[1990]: Removed session 10.
Mar 12 23:47:36.130598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463719176.mount: Deactivated successfully.
Mar 12 23:47:37.204635 containerd[2024]: time="2026-03-12T23:47:37.204299409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:47:37.213374 containerd[2024]: time="2026-03-12T23:47:37.213109257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980"
Mar 12 23:47:37.214783 containerd[2024]: time="2026-03-12T23:47:37.214466517Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:47:37.228270 containerd[2024]: time="2026-03-12T23:47:37.228107073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:47:37.234272 containerd[2024]: time="2026-03-12T23:47:37.233283766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.93770078s"
Mar 12 23:47:37.234272 containerd[2024]: time="2026-03-12T23:47:37.233354866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\""
Mar 12 23:47:37.245106 containerd[2024]: time="2026-03-12T23:47:37.244791850Z" level=info msg="CreateContainer within sandbox \"b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 12 23:47:37.264367 containerd[2024]: time="2026-03-12T23:47:37.264312058Z" level=info msg="Container 93e969e22149e526b83f4df9e57f376d1037e68a617f0c03649ef98b18119b91: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:47:37.286505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883442861.mount: Deactivated successfully.
Mar 12 23:47:37.291369 containerd[2024]: time="2026-03-12T23:47:37.291230002Z" level=info msg="CreateContainer within sandbox \"b6e90d913f18dd1829b5a1f6101e846ff8d852bcd697c515b65ed38ac1ff7e9b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"93e969e22149e526b83f4df9e57f376d1037e68a617f0c03649ef98b18119b91\""
Mar 12 23:47:37.293886 containerd[2024]: time="2026-03-12T23:47:37.292913710Z" level=info msg="StartContainer for \"93e969e22149e526b83f4df9e57f376d1037e68a617f0c03649ef98b18119b91\""
Mar 12 23:47:37.297314 containerd[2024]: time="2026-03-12T23:47:37.297205906Z" level=info msg="connecting to shim 93e969e22149e526b83f4df9e57f376d1037e68a617f0c03649ef98b18119b91" address="unix:///run/containerd/s/efafeec5f901ea99b1ab864b4cceaf87c0bd0e62f8fd23e2aa1a927d8cb1fb24" protocol=ttrpc version=3
Mar 12 23:47:37.354322 systemd[1]: Started cri-containerd-93e969e22149e526b83f4df9e57f376d1037e68a617f0c03649ef98b18119b91.scope - libcontainer container 93e969e22149e526b83f4df9e57f376d1037e68a617f0c03649ef98b18119b91.
Mar 12 23:47:37.597808 containerd[2024]: time="2026-03-12T23:47:37.597639959Z" level=info msg="StartContainer for \"93e969e22149e526b83f4df9e57f376d1037e68a617f0c03649ef98b18119b91\" returns successfully"
Mar 12 23:47:38.104837 kubelet[3357]: I0312 23:47:38.104742 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-kmm7z" podStartSLOduration=43.595604165 podStartE2EDuration="50.104598574s" podCreationTimestamp="2026-03-12 23:46:48 +0000 UTC" firstStartedPulling="2026-03-12 23:47:30.727848785 +0000 UTC m=+63.859354878" lastFinishedPulling="2026-03-12 23:47:37.236843182 +0000 UTC m=+70.368349287" observedRunningTime="2026-03-12 23:47:38.103439374 +0000 UTC m=+71.234945491" watchObservedRunningTime="2026-03-12 23:47:38.104598574 +0000 UTC m=+71.236104715"
Mar 12 23:47:38.112499 kubelet[3357]: I0312 23:47:38.110905 3357 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-96c4f6954-g4777" podStartSLOduration=43.511786805 podStartE2EDuration="50.110840734s" podCreationTimestamp="2026-03-12 23:46:48 +0000 UTC" firstStartedPulling="2026-03-12 23:47:26.694220941 +0000 UTC m=+59.825727046" lastFinishedPulling="2026-03-12 23:47:33.293274882 +0000 UTC m=+66.424780975" observedRunningTime="2026-03-12 23:47:34.030049842 +0000 UTC m=+67.161556067" watchObservedRunningTime="2026-03-12 23:47:38.110840734 +0000 UTC m=+71.242346863"
Mar 12 23:47:41.093208 systemd[1]: Started sshd@10-172.31.21.65:22-4.153.228.146:47428.service - OpenSSH per-connection server daemon (4.153.228.146:47428).
Mar 12 23:47:41.571796 sshd[6234]: Accepted publickey for core from 4.153.228.146 port 47428 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:41.574956 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:41.584618 systemd-logind[1990]: New session 11 of user core.
Mar 12 23:47:41.599153 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 23:47:42.028864 sshd[6237]: Connection closed by 4.153.228.146 port 47428
Mar 12 23:47:42.030035 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:42.043460 systemd[1]: sshd@10-172.31.21.65:22-4.153.228.146:47428.service: Deactivated successfully.
Mar 12 23:47:42.056878 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 23:47:42.061864 systemd-logind[1990]: Session 11 logged out. Waiting for processes to exit.
Mar 12 23:47:42.070331 systemd-logind[1990]: Removed session 11.
Mar 12 23:47:42.120484 systemd[1]: Started sshd@11-172.31.21.65:22-4.153.228.146:47434.service - OpenSSH per-connection server daemon (4.153.228.146:47434).
Mar 12 23:47:42.573992 sshd[6265]: Accepted publickey for core from 4.153.228.146 port 47434 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:42.576643 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:42.585734 systemd-logind[1990]: New session 12 of user core.
Mar 12 23:47:42.597989 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 23:47:43.028336 sshd[6269]: Connection closed by 4.153.228.146 port 47434
Mar 12 23:47:43.027315 sshd-session[6265]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:43.034711 systemd[1]: sshd@11-172.31.21.65:22-4.153.228.146:47434.service: Deactivated successfully.
Mar 12 23:47:43.035486 systemd-logind[1990]: Session 12 logged out. Waiting for processes to exit.
Mar 12 23:47:43.041569 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 23:47:43.046432 systemd-logind[1990]: Removed session 12.
Mar 12 23:47:43.121333 systemd[1]: Started sshd@12-172.31.21.65:22-4.153.228.146:47436.service - OpenSSH per-connection server daemon (4.153.228.146:47436).
Mar 12 23:47:43.586875 sshd[6279]: Accepted publickey for core from 4.153.228.146 port 47436 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:43.591149 sshd-session[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:43.602512 systemd-logind[1990]: New session 13 of user core.
Mar 12 23:47:43.612649 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 23:47:43.993309 sshd[6282]: Connection closed by 4.153.228.146 port 47436
Mar 12 23:47:43.994222 sshd-session[6279]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:44.005560 systemd[1]: sshd@12-172.31.21.65:22-4.153.228.146:47436.service: Deactivated successfully.
Mar 12 23:47:44.012547 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 23:47:44.017053 systemd-logind[1990]: Session 13 logged out. Waiting for processes to exit.
Mar 12 23:47:44.019572 systemd-logind[1990]: Removed session 13.
Mar 12 23:47:47.388220 kubelet[3357]: I0312 23:47:47.387613 3357 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 23:47:49.087268 systemd[1]: Started sshd@13-172.31.21.65:22-4.153.228.146:40686.service - OpenSSH per-connection server daemon (4.153.228.146:40686).
Mar 12 23:47:49.562324 sshd[6322]: Accepted publickey for core from 4.153.228.146 port 40686 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:49.565789 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:49.575606 systemd-logind[1990]: New session 14 of user core.
Mar 12 23:47:49.583178 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 23:47:50.002186 sshd[6325]: Connection closed by 4.153.228.146 port 40686
Mar 12 23:47:50.002735 sshd-session[6322]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:50.012053 systemd-logind[1990]: Session 14 logged out. Waiting for processes to exit.
Mar 12 23:47:50.013393 systemd[1]: sshd@13-172.31.21.65:22-4.153.228.146:40686.service: Deactivated successfully.
Mar 12 23:47:50.021954 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 23:47:50.027846 systemd-logind[1990]: Removed session 14.
Mar 12 23:47:50.099594 systemd[1]: Started sshd@14-172.31.21.65:22-4.153.228.146:40694.service - OpenSSH per-connection server daemon (4.153.228.146:40694).
Mar 12 23:47:50.564870 sshd[6338]: Accepted publickey for core from 4.153.228.146 port 40694 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:50.567369 sshd-session[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:50.577988 systemd-logind[1990]: New session 15 of user core.
Mar 12 23:47:50.583204 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 23:47:51.299820 sshd[6341]: Connection closed by 4.153.228.146 port 40694
Mar 12 23:47:51.302447 sshd-session[6338]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:51.311560 systemd[1]: sshd@14-172.31.21.65:22-4.153.228.146:40694.service: Deactivated successfully.
Mar 12 23:47:51.316752 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 23:47:51.319457 systemd-logind[1990]: Session 15 logged out. Waiting for processes to exit.
Mar 12 23:47:51.322607 systemd-logind[1990]: Removed session 15.
Mar 12 23:47:51.392332 systemd[1]: Started sshd@15-172.31.21.65:22-4.153.228.146:40700.service - OpenSSH per-connection server daemon (4.153.228.146:40700).
Mar 12 23:47:51.858373 sshd[6358]: Accepted publickey for core from 4.153.228.146 port 40700 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:51.860772 sshd-session[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:51.870409 systemd-logind[1990]: New session 16 of user core.
Mar 12 23:47:51.880933 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 23:47:53.172273 sshd[6361]: Connection closed by 4.153.228.146 port 40700
Mar 12 23:47:53.173773 sshd-session[6358]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:53.183678 systemd[1]: sshd@15-172.31.21.65:22-4.153.228.146:40700.service: Deactivated successfully.
Mar 12 23:47:53.189836 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 23:47:53.198851 systemd-logind[1990]: Session 16 logged out. Waiting for processes to exit.
Mar 12 23:47:53.201866 systemd-logind[1990]: Removed session 16.
Mar 12 23:47:53.266696 systemd[1]: Started sshd@16-172.31.21.65:22-4.153.228.146:40702.service - OpenSSH per-connection server daemon (4.153.228.146:40702).
Mar 12 23:47:53.732982 sshd[6386]: Accepted publickey for core from 4.153.228.146 port 40702 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:53.737603 sshd-session[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:53.756841 systemd-logind[1990]: New session 17 of user core.
Mar 12 23:47:53.761837 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 23:47:54.492128 sshd[6389]: Connection closed by 4.153.228.146 port 40702
Mar 12 23:47:54.493407 sshd-session[6386]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:54.501665 systemd[1]: sshd@16-172.31.21.65:22-4.153.228.146:40702.service: Deactivated successfully.
Mar 12 23:47:54.508115 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 23:47:54.510978 systemd-logind[1990]: Session 17 logged out. Waiting for processes to exit.
Mar 12 23:47:54.514988 systemd-logind[1990]: Removed session 17.
Mar 12 23:47:54.583843 systemd[1]: Started sshd@17-172.31.21.65:22-4.153.228.146:40716.service - OpenSSH per-connection server daemon (4.153.228.146:40716).
Mar 12 23:47:55.042956 sshd[6405]: Accepted publickey for core from 4.153.228.146 port 40716 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:47:55.045121 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:55.053209 systemd-logind[1990]: New session 18 of user core.
Mar 12 23:47:55.063170 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 23:47:55.416789 sshd[6408]: Connection closed by 4.153.228.146 port 40716
Mar 12 23:47:55.417679 sshd-session[6405]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:55.426470 systemd[1]: sshd@17-172.31.21.65:22-4.153.228.146:40716.service: Deactivated successfully.
Mar 12 23:47:55.432468 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 23:47:55.435371 systemd-logind[1990]: Session 18 logged out. Waiting for processes to exit.
Mar 12 23:47:55.440082 systemd-logind[1990]: Removed session 18.
Mar 12 23:48:00.515595 systemd[1]: Started sshd@18-172.31.21.65:22-4.153.228.146:35508.service - OpenSSH per-connection server daemon (4.153.228.146:35508).
Mar 12 23:48:00.998070 sshd[6432]: Accepted publickey for core from 4.153.228.146 port 35508 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:01.000230 sshd-session[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:01.009152 systemd-logind[1990]: New session 19 of user core.
Mar 12 23:48:01.016192 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 23:48:01.378311 sshd[6460]: Connection closed by 4.153.228.146 port 35508
Mar 12 23:48:01.378195 sshd-session[6432]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:01.387564 systemd-logind[1990]: Session 19 logged out. Waiting for processes to exit.
Mar 12 23:48:01.388361 systemd[1]: sshd@18-172.31.21.65:22-4.153.228.146:35508.service: Deactivated successfully.
Mar 12 23:48:01.394026 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 23:48:01.397670 systemd-logind[1990]: Removed session 19.
Mar 12 23:48:05.193962 update_engine[1993]: I20260312 23:48:05.193413 1993 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 12 23:48:05.193962 update_engine[1993]: I20260312 23:48:05.193485 1993 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 12 23:48:05.196237 update_engine[1993]: I20260312 23:48:05.194021 1993 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 12 23:48:05.196725 update_engine[1993]: I20260312 23:48:05.196630 1993 omaha_request_params.cc:62] Current group set to stable
Mar 12 23:48:05.197074 update_engine[1993]: I20260312 23:48:05.196800 1993 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 12 23:48:05.197074 update_engine[1993]: I20260312 23:48:05.196823 1993 update_attempter.cc:643] Scheduling an action processor start.
Mar 12 23:48:05.197074 update_engine[1993]: I20260312 23:48:05.196856 1993 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 12 23:48:05.197074 update_engine[1993]: I20260312 23:48:05.196934 1993 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 12 23:48:05.197277 update_engine[1993]: I20260312 23:48:05.197067 1993 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 12 23:48:05.197277 update_engine[1993]: I20260312 23:48:05.197088 1993 omaha_request_action.cc:272] Request:
Mar 12 23:48:05.197277 update_engine[1993]:
Mar 12 23:48:05.197277 update_engine[1993]:
Mar 12 23:48:05.197277 update_engine[1993]:
Mar 12 23:48:05.197277 update_engine[1993]:
Mar 12 23:48:05.197277 update_engine[1993]:
Mar 12 23:48:05.197277 update_engine[1993]:
Mar 12 23:48:05.197277 update_engine[1993]:
Mar 12 23:48:05.197277 update_engine[1993]:
Mar 12 23:48:05.197277 update_engine[1993]: I20260312 23:48:05.197105 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:05.205716 update_engine[1993]: I20260312 23:48:05.205592 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:05.207470 update_engine[1993]: I20260312 23:48:05.207386 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:05.207782 locksmithd[2041]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 12 23:48:05.215905 update_engine[1993]: E20260312 23:48:05.215811 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:05.216028 update_engine[1993]: I20260312 23:48:05.215977 1993 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 12 23:48:06.470883 systemd[1]: Started sshd@19-172.31.21.65:22-4.153.228.146:35514.service - OpenSSH per-connection server daemon (4.153.228.146:35514).
Mar 12 23:48:06.929498 sshd[6474]: Accepted publickey for core from 4.153.228.146 port 35514 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:06.931747 sshd-session[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:06.941024 systemd-logind[1990]: New session 20 of user core.
Mar 12 23:48:06.947420 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 23:48:07.324729 sshd[6477]: Connection closed by 4.153.228.146 port 35514
Mar 12 23:48:07.324529 sshd-session[6474]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:07.333763 systemd[1]: sshd@19-172.31.21.65:22-4.153.228.146:35514.service: Deactivated successfully.
Mar 12 23:48:07.339147 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 23:48:07.342189 systemd-logind[1990]: Session 20 logged out. Waiting for processes to exit.
Mar 12 23:48:07.346094 systemd-logind[1990]: Removed session 20.
Mar 12 23:48:12.428653 systemd[1]: Started sshd@20-172.31.21.65:22-4.153.228.146:36170.service - OpenSSH per-connection server daemon (4.153.228.146:36170).
Mar 12 23:48:12.901771 sshd[6563]: Accepted publickey for core from 4.153.228.146 port 36170 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:12.904060 sshd-session[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:12.912857 systemd-logind[1990]: New session 21 of user core.
Mar 12 23:48:12.918253 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 23:48:13.295769 sshd[6566]: Connection closed by 4.153.228.146 port 36170
Mar 12 23:48:13.297184 sshd-session[6563]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:13.304796 systemd[1]: sshd@20-172.31.21.65:22-4.153.228.146:36170.service: Deactivated successfully.
Mar 12 23:48:13.309712 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 23:48:13.314736 systemd-logind[1990]: Session 21 logged out. Waiting for processes to exit.
Mar 12 23:48:13.317748 systemd-logind[1990]: Removed session 21.
Mar 12 23:48:15.193370 update_engine[1993]: I20260312 23:48:15.192462 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:15.193370 update_engine[1993]: I20260312 23:48:15.192586 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:15.193370 update_engine[1993]: I20260312 23:48:15.193147 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:15.200808 update_engine[1993]: E20260312 23:48:15.200582 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:15.200808 update_engine[1993]: I20260312 23:48:15.200737 1993 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 12 23:48:18.391272 systemd[1]: Started sshd@21-172.31.21.65:22-4.153.228.146:36178.service - OpenSSH per-connection server daemon (4.153.228.146:36178).
Mar 12 23:48:18.870005 sshd[6602]: Accepted publickey for core from 4.153.228.146 port 36178 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:18.874943 sshd-session[6602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:18.886160 systemd-logind[1990]: New session 22 of user core.
Mar 12 23:48:18.894166 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 12 23:48:19.234004 sshd[6605]: Connection closed by 4.153.228.146 port 36178
Mar 12 23:48:19.233798 sshd-session[6602]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:19.245354 systemd[1]: sshd@21-172.31.21.65:22-4.153.228.146:36178.service: Deactivated successfully.
Mar 12 23:48:19.249658 systemd[1]: session-22.scope: Deactivated successfully.
Mar 12 23:48:19.252390 systemd-logind[1990]: Session 22 logged out. Waiting for processes to exit.
Mar 12 23:48:19.256977 systemd-logind[1990]: Removed session 22.
Mar 12 23:48:24.325971 systemd[1]: Started sshd@22-172.31.21.65:22-4.153.228.146:54152.service - OpenSSH per-connection server daemon (4.153.228.146:54152).
Mar 12 23:48:24.776560 sshd[6617]: Accepted publickey for core from 4.153.228.146 port 54152 ssh2: RSA SHA256:Yb2A+7C2VsREc83tMMGpKwh3Xx/OtDn7IxdhUqKM9Jc
Mar 12 23:48:24.779003 sshd-session[6617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:24.786954 systemd-logind[1990]: New session 23 of user core.
Mar 12 23:48:24.796180 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 12 23:48:25.144987 sshd[6620]: Connection closed by 4.153.228.146 port 54152
Mar 12 23:48:25.146161 sshd-session[6617]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:25.152014 systemd[1]: sshd@22-172.31.21.65:22-4.153.228.146:54152.service: Deactivated successfully.
Mar 12 23:48:25.156873 systemd[1]: session-23.scope: Deactivated successfully.
Mar 12 23:48:25.163048 systemd-logind[1990]: Session 23 logged out. Waiting for processes to exit.
Mar 12 23:48:25.165242 systemd-logind[1990]: Removed session 23.
Mar 12 23:48:25.191635 update_engine[1993]: I20260312 23:48:25.190989 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:25.191635 update_engine[1993]: I20260312 23:48:25.191087 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:25.192471 update_engine[1993]: I20260312 23:48:25.192426 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:25.193325 update_engine[1993]: E20260312 23:48:25.193290 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:25.193547 update_engine[1993]: I20260312 23:48:25.193506 1993 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 12 23:48:35.193022 update_engine[1993]: I20260312 23:48:35.192410 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:35.193022 update_engine[1993]: I20260312 23:48:35.192524 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:35.193847 update_engine[1993]: I20260312 23:48:35.193790 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:35.206380 update_engine[1993]: E20260312 23:48:35.204998 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205139 1993 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205158 1993 omaha_request_action.cc:617] Omaha request response:
Mar 12 23:48:35.206380 update_engine[1993]: E20260312 23:48:35.205283 1993 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205325 1993 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205340 1993 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205354 1993 update_attempter.cc:306] Processing Done.
Mar 12 23:48:35.206380 update_engine[1993]: E20260312 23:48:35.205380 1993 update_attempter.cc:619] Update failed.
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205396 1993 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205411 1993 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205426 1993 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205551 1993 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205593 1993 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 12 23:48:35.206380 update_engine[1993]: I20260312 23:48:35.205610 1993 omaha_request_action.cc:272] Request:
Mar 12 23:48:35.206380 update_engine[1993]:
Mar 12 23:48:35.206380 update_engine[1993]:
Mar 12 23:48:35.207245 update_engine[1993]:
Mar 12 23:48:35.207245 update_engine[1993]:
Mar 12 23:48:35.207245 update_engine[1993]:
Mar 12 23:48:35.207245 update_engine[1993]:
Mar 12 23:48:35.207245 update_engine[1993]: I20260312 23:48:35.205624 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:35.207245 update_engine[1993]: I20260312 23:48:35.205663 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:35.207245 update_engine[1993]: I20260312 23:48:35.206180 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:35.208348 locksmithd[2041]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 12 23:48:35.208864 update_engine[1993]: E20260312 23:48:35.207988 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:35.208864 update_engine[1993]: I20260312 23:48:35.208106 1993 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 12 23:48:35.208864 update_engine[1993]: I20260312 23:48:35.208124 1993 omaha_request_action.cc:617] Omaha request response:
Mar 12 23:48:35.208864 update_engine[1993]: I20260312 23:48:35.208140 1993 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 23:48:35.208864 update_engine[1993]: I20260312 23:48:35.208154 1993 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 23:48:35.208864 update_engine[1993]: I20260312 23:48:35.208168 1993 update_attempter.cc:306] Processing Done.
Mar 12 23:48:35.208864 update_engine[1993]: I20260312 23:48:35.208184 1993 update_attempter.cc:310] Error event sent.
Mar 12 23:48:35.208864 update_engine[1993]: I20260312 23:48:35.208206 1993 update_check_scheduler.cc:74] Next update check in 46m36s
Mar 12 23:48:35.209551 locksmithd[2041]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 12 23:48:39.394022 systemd[1]: cri-containerd-f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb.scope: Deactivated successfully.
Mar 12 23:48:39.395911 systemd[1]: cri-containerd-f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb.scope: Consumed 18.223s CPU time, 116.9M memory peak.
Mar 12 23:48:39.398617 containerd[2024]: time="2026-03-12T23:48:39.398450290Z" level=info msg="received container exit event container_id:\"f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb\" id:\"f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb\" pid:3957 exit_status:1 exited_at:{seconds:1773359319 nanos:397770094}"
Mar 12 23:48:39.448126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb-rootfs.mount: Deactivated successfully.
Mar 12 23:48:39.851343 systemd[1]: cri-containerd-b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170.scope: Deactivated successfully.
Mar 12 23:48:39.854158 systemd[1]: cri-containerd-b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170.scope: Consumed 4.890s CPU time, 59.4M memory peak, 64K read from disk.
Mar 12 23:48:39.864699 containerd[2024]: time="2026-03-12T23:48:39.864621601Z" level=info msg="received container exit event container_id:\"b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170\" id:\"b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170\" pid:3207 exit_status:1 exited_at:{seconds:1773359319 nanos:864269593}"
Mar 12 23:48:39.919213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170-rootfs.mount: Deactivated successfully.
Mar 12 23:48:40.292950 kubelet[3357]: I0312 23:48:40.292499 3357 scope.go:117] "RemoveContainer" containerID="b5d31c494f33d4c640b8b93dd7ecc9bb1f06928af86ffc78423e8dd92ff6c170"
Mar 12 23:48:40.298685 kubelet[3357]: I0312 23:48:40.298525 3357 scope.go:117] "RemoveContainer" containerID="f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb"
Mar 12 23:48:40.299089 containerd[2024]: time="2026-03-12T23:48:40.299006771Z" level=info msg="CreateContainer within sandbox \"42e67c40b16f93b172f53920b7c3c57a7667f142e3e5edee14e242a70b957dda\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 12 23:48:40.325503 containerd[2024]: time="2026-03-12T23:48:40.325192919Z" level=info msg="Container 579b6796b5137b70bed9c1178f3669e527445ce28b7888724d1c2b09e8c709d0: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:48:40.336267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184462481.mount: Deactivated successfully.
Mar 12 23:48:40.345940 containerd[2024]: time="2026-03-12T23:48:40.345643739Z" level=info msg="CreateContainer within sandbox \"bc9cd43889d2b554208ca9a6d31ce911f23be5b07acdb89a5c722d70a73ca657\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 12 23:48:40.349238 containerd[2024]: time="2026-03-12T23:48:40.349183559Z" level=info msg="CreateContainer within sandbox \"42e67c40b16f93b172f53920b7c3c57a7667f142e3e5edee14e242a70b957dda\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"579b6796b5137b70bed9c1178f3669e527445ce28b7888724d1c2b09e8c709d0\""
Mar 12 23:48:40.350700 containerd[2024]: time="2026-03-12T23:48:40.350655131Z" level=info msg="StartContainer for \"579b6796b5137b70bed9c1178f3669e527445ce28b7888724d1c2b09e8c709d0\""
Mar 12 23:48:40.353266 containerd[2024]: time="2026-03-12T23:48:40.353150219Z" level=info msg="connecting to shim 579b6796b5137b70bed9c1178f3669e527445ce28b7888724d1c2b09e8c709d0" address="unix:///run/containerd/s/f73f43bb1631c6dfba4548008870f26801120c6a294be536504b4ef986df9d9a" protocol=ttrpc version=3
Mar 12 23:48:40.395980 containerd[2024]: time="2026-03-12T23:48:40.395832791Z" level=info msg="Container b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:48:40.396384 systemd[1]: Started cri-containerd-579b6796b5137b70bed9c1178f3669e527445ce28b7888724d1c2b09e8c709d0.scope - libcontainer container 579b6796b5137b70bed9c1178f3669e527445ce28b7888724d1c2b09e8c709d0.
Mar 12 23:48:40.429943 containerd[2024]: time="2026-03-12T23:48:40.429801767Z" level=info msg="CreateContainer within sandbox \"bc9cd43889d2b554208ca9a6d31ce911f23be5b07acdb89a5c722d70a73ca657\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3\""
Mar 12 23:48:40.432107 containerd[2024]: time="2026-03-12T23:48:40.431964455Z" level=info msg="StartContainer for \"b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3\""
Mar 12 23:48:40.438413 containerd[2024]: time="2026-03-12T23:48:40.438353219Z" level=info msg="connecting to shim b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3" address="unix:///run/containerd/s/ec6c04992276d742cf2242099e1adfc4d0c71c80e8520c02199a63c9639a25a8" protocol=ttrpc version=3
Mar 12 23:48:40.496012 systemd[1]: Started cri-containerd-b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3.scope - libcontainer container b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3.
Mar 12 23:48:40.532833 containerd[2024]: time="2026-03-12T23:48:40.532678428Z" level=info msg="StartContainer for \"579b6796b5137b70bed9c1178f3669e527445ce28b7888724d1c2b09e8c709d0\" returns successfully"
Mar 12 23:48:40.598659 containerd[2024]: time="2026-03-12T23:48:40.597754128Z" level=info msg="StartContainer for \"b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3\" returns successfully"
Mar 12 23:48:44.887014 systemd[1]: cri-containerd-6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be.scope: Deactivated successfully.
Mar 12 23:48:44.888628 systemd[1]: cri-containerd-6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be.scope: Consumed 3.390s CPU time, 20.8M memory peak.
Mar 12 23:48:44.892781 containerd[2024]: time="2026-03-12T23:48:44.892665582Z" level=info msg="received container exit event container_id:\"6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be\" id:\"6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be\" pid:3161 exit_status:1 exited_at:{seconds:1773359324 nanos:892004706}"
Mar 12 23:48:44.938971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be-rootfs.mount: Deactivated successfully.
Mar 12 23:48:45.329218 kubelet[3357]: I0312 23:48:45.328539 3357 scope.go:117] "RemoveContainer" containerID="6797d9127297fce47f2a37706b6457f842c24acc1b42a2afce8239cb88f7e3be"
Mar 12 23:48:45.333645 containerd[2024]: time="2026-03-12T23:48:45.332857156Z" level=info msg="CreateContainer within sandbox \"c64c5542bcb541d5e3bcc2fd13ad65284f0d538b7ff1d5680ede478cbd8f3954\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 12 23:48:45.350704 containerd[2024]: time="2026-03-12T23:48:45.350102536Z" level=info msg="Container b25d607405071236520363e9dd5353e91cbebef985a477e43dcfd214a1d1d88e: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:48:45.369568 containerd[2024]: time="2026-03-12T23:48:45.369489268Z" level=info msg="CreateContainer within sandbox \"c64c5542bcb541d5e3bcc2fd13ad65284f0d538b7ff1d5680ede478cbd8f3954\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b25d607405071236520363e9dd5353e91cbebef985a477e43dcfd214a1d1d88e\""
Mar 12 23:48:45.370572 containerd[2024]: time="2026-03-12T23:48:45.370504012Z" level=info msg="StartContainer for \"b25d607405071236520363e9dd5353e91cbebef985a477e43dcfd214a1d1d88e\""
Mar 12 23:48:45.372824 containerd[2024]: time="2026-03-12T23:48:45.372764500Z" level=info msg="connecting to shim b25d607405071236520363e9dd5353e91cbebef985a477e43dcfd214a1d1d88e" address="unix:///run/containerd/s/2f4121e5b151b1cbed2d464fe26b0a431315fc95a45c4fe4ff88d84474afcae7" protocol=ttrpc version=3
Mar 12 23:48:45.415219 systemd[1]: Started cri-containerd-b25d607405071236520363e9dd5353e91cbebef985a477e43dcfd214a1d1d88e.scope - libcontainer container b25d607405071236520363e9dd5353e91cbebef985a477e43dcfd214a1d1d88e.
Mar 12 23:48:45.497942 containerd[2024]: time="2026-03-12T23:48:45.497720237Z" level=info msg="StartContainer for \"b25d607405071236520363e9dd5353e91cbebef985a477e43dcfd214a1d1d88e\" returns successfully"
Mar 12 23:48:49.033058 kubelet[3357]: E0312 23:48:49.032837 3357 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-65?timeout=10s\": context deadline exceeded"
Mar 12 23:48:52.102414 systemd[1]: cri-containerd-b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3.scope: Deactivated successfully.
Mar 12 23:48:52.105631 containerd[2024]: time="2026-03-12T23:48:52.104248041Z" level=info msg="received container exit event container_id:\"b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3\" id:\"b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3\" pid:6747 exit_status:1 exited_at:{seconds:1773359332 nanos:102084405}"
Mar 12 23:48:52.149408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3-rootfs.mount: Deactivated successfully.
Mar 12 23:48:52.359013 kubelet[3357]: I0312 23:48:52.358853 3357 scope.go:117] "RemoveContainer" containerID="f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb"
Mar 12 23:48:52.361227 kubelet[3357]: I0312 23:48:52.361173 3357 scope.go:117] "RemoveContainer" containerID="b9bb48125fec87f2f0af0e12cb24a7d536b5f82686620784e2ee3cc3bad562d3"
Mar 12 23:48:52.361520 kubelet[3357]: E0312 23:48:52.361458 3357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-qd5kz_tigera-operator(88c9b78b-868b-4006-aaac-9bf4f01e050e)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-qd5kz" podUID="88c9b78b-868b-4006-aaac-9bf4f01e050e"
Mar 12 23:48:52.363428 containerd[2024]: time="2026-03-12T23:48:52.363375167Z" level=info msg="RemoveContainer for \"f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb\""
Mar 12 23:48:52.375094 containerd[2024]: time="2026-03-12T23:48:52.375044891Z" level=info msg="RemoveContainer for \"f5dfd1eed594b753580bdfaac9b5a6d6feb3c08b0c1af6155dff1384669a71cb\" returns successfully"
Mar 12 23:48:59.033539 kubelet[3357]: E0312 23:48:59.033461 3357 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-65?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"